At the May 2026 MarTech Conference, practitioners from agencies, SaaS companies, and creative consultancies converged on a single conclusion: the era of chasing AI tools is over, and the era of building AI agents has begun. The question is no longer which AI feature to test next — it’s how to redesign your marketing organization around agentic workflows that permanently recover time for the strategic work that actually moves the needle.
What Happened
The MarTech Conference’s May 2026 panel “AI + Human Ingenuity: Where Creative and Technical Teams Meet” brought together practitioners who are past the experimentation phase and into operational deployment. Moderated by Molly St. Louis of EM Marketing, the conversation featured Greg Boone from full-service digital agency Walk West, Peter Isaacson of conversation intelligence platform Invoca, and Kate Roberts from Cella by Randstad Digital, a creative resourcing and workforce consultancy.
The central diagnosis coming out of that room: most marketing organizations are stuck at stage one of AI adoption, and the gap between stage one and stage two is where competitive advantage is being decided right now.
Isaacson, drawing on his operational experience at Invoca, structured the conversation around a three-stage AI maturity framework that became the organizing spine of the entire session:
Stage 1 — Content and Discovery: Teams are using generative AI for search and copywriting. This is the copy-paste-ChatGPT era — useful, defensible as a starting point, but nowhere near transformational. The workflow hasn’t changed; the tool has. A copywriter using AI to draft faster is still running the same linear process, just faster at one node.
Stage 2 — Agentic Synthesis: Teams build autonomous agents that gather data, analyze it, and surface actionable intelligence with minimal human intervention. This is where Isaacson said “the path forward lies.” Agents that monitor signals, synthesize cross-system data, and output structured recommendations or completed artifacts — not just tools that respond to prompts.
Stage 3 — Workflow Redesign: Full organizational restructuring around AI-native processes. Not many teams are operating here yet, but that’s the endpoint the industry is moving toward — where the org chart, team structure, and project management approach are all reconceived around what agents can handle autonomously.
What made Isaacson’s framework land harder than most conference frameworks do: when moderator St. Louis polled the room, 46% of the audience reported still being focused on stage one — content generation. Less than half a year after the agentic AI wave became impossible to ignore, the majority of practitioners at a dedicated marketing technology conference were still experimenting with text generation. That is not a condemnation of the individuals in the room. It is a structural observation about how slowly organizations move relative to how fast the underlying tooling is advancing.
Roberts, representing the creative and workforce side through Cella by Randstad Digital, brought a complementary perspective on where the friction actually lives inside creative teams. Her focus was the administrative overhead quietly destroying creative output — the time that skilled creatives spend on project briefs, status updates, tooling administration, and first-draft structural scaffolding before any genuine creative judgment is applied. She pointed specifically to AI integration inside tools already embedded in creative team workflows: Jira for project tracking, Workfront for resource and campaign management, and Figma for design production.
Her most concrete example: AI-powered wireframing in Figma takes a team from a blank screen to a client-ready prototype in minutes. Not hours — minutes. The productivity differential is significant, but Roberts framed the deeper value correctly. As she put it, “Who’s not excited to get time back?” — positioning AI adoption not as a threat to creative professionals but as a reclamation of the creative time that administrative drag has been steadily consuming, as reported by MarTech’s coverage of the session.
Boone brought the most operationally concrete recommendations of the panel. Walk West — a full-service digital agency — has made generative AI certification mandatory for all staff. This is not optional training or a suggested resource library. It is an organizational requirement that creates a shared language and baseline capability spanning both creative and technical functions simultaneously. He also introduced the concept of “transformation challenges”: structured organizational exercises designed to surface the non-value-adding tasks consuming team bandwidth, so those tasks can be systematically identified, piloted with AI assistance, and eliminated. Critically, Boone emphasized collaborating cross-functionally rather than optimizing in silos, because “all teams are navigating AI adoption simultaneously,” as MarTech reported.
Taken together, the three panelists drew a clear arc: from experimenting with AI tools in isolation, to deploying agents that automate the analytical and administrative layers, to redesigning how work gets structured so human ingenuity is protected and directed toward outcomes that require genuine judgment, creativity, and contextual intelligence that agents cannot replicate.
Why This Matters
The stage one trap is real, and it is expensive in ways that don’t show up on a P&L until the competitive gap has already become hard to close.
Here is the mechanism: when marketing teams treat AI as a collection of individual productivity tools — one for copy, one for image generation, one for scheduling, one for analytics summarization — they get micro-gains but never fundamentally change organizational throughput. A copywriter who uses AI to draft faster is still a copywriter operating inside the same linear workflow. The bottleneck moves from “writing the draft” to “reviewing, briefing, revising, routing for approval” — but the underlying workflow architecture has not changed. You have improved one node in a process without improving the process itself. The ceiling on individual productivity gains is reached quickly.
Agentic AI changes the architecture itself. An agent doesn’t wait for a human to initiate a task. It monitors incoming signals, pulls data from connected systems, synthesizes across sources, and outputs a recommendation or a completed artifact. The human reviews and approves rather than builds and produces from scratch. That is a qualitatively different operating model — and it explains why Isaacson’s framing of stage two as the forward path matters beyond conference-panel optimism. The teams operating stage-two agentic workflows are compounding efficiency gains rather than accumulating individual productivity hacks.
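To make that operating model concrete, the monitor-synthesize-review loop can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s implementation: the signal sources, the 15% threshold, and every field name are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # originating system, e.g. "web_analytics" or "crm" (hypothetical)
    metric: str
    value: float
    baseline: float

def synthesize(signals):
    """Turn cross-system signals into a reviewable recommendation.

    The agent surfaces findings; a human approves any action.
    """
    findings = []
    for s in signals:
        change = (s.value - s.baseline) / s.baseline
        if abs(change) >= 0.15:  # flag moves of 15% or more (arbitrary threshold)
            direction = "up" if change > 0 else "down"
            findings.append(f"{s.source}: {s.metric} {direction} {abs(change):.0%}")
    return {"status": "needs_review" if findings else "no_action",
            "findings": findings}

report = synthesize([
    Signal("web_analytics", "landing_page_conversions", 820, 1000),
    Signal("crm", "qualified_leads", 210, 200),
])
```

The design point is the final step: the agent returns a `needs_review` status rather than acting on its own, which preserves the human-approval layer the panelists emphasized.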
For agencies, the implications are particularly acute because the creative-technical divide has always been a structural friction point. Account teams and creatives speak the language of campaigns, audiences, and brand strategy. Technical teams speak the language of data pipelines, API integrations, and conversion architecture. These two groups have historically needed translation layers — project managers, operations leads, account coordinators — to communicate at handoff points. Autonomous agents can sit at that intersection without requiring a human translator. An agent can translate campaign performance data into an actionable creative brief. An agent can surface audience signal data into wireframe requirements. An agent can route project status updates across Jira, Workfront, and Asana simultaneously without a project manager toggling between three systems. This is what Roberts was pointing to when she named those specific platforms: not new AI-native tools, but AI embedded into the systems where work already happens and where the friction between creative and technical functions is most costly.
The consumer-side data adds pressure to get this right. According to Prophet’s 2026 AI-Powered Consumer Report, cited by MarTech, 73% of consumers now use generative AI — up from 45% in 2024. But consumer excitement about AI has declined 7%, and the belief that AI will “handle most decisions” has dropped 30%. Most critically for brand marketers: 71% of consumers worry about AI inaccuracies and misinformation, and 62% express frustration when companies eliminate human support entirely. The market is entering what Gartner characterizes as the “trough of disillusionment” for AI, where the initial novelty premium disappears and trust becomes the operative variable.
The commercial reading of that data is not “slow down AI adoption.” It is “deploy AI in the right places.” Internal workflow AI — agents that make your team faster, smarter, and more capable — creates efficiency without triggering consumer trust erosion. Customer-facing AI automation that replaces human judgment and interaction runs directly into the 62% who push back when brands eliminate the human from the equation. The marketing teams reading those signals correctly are doing exactly what the MarTech panelists were articulating: use AI to free up humans for the interactions that require trust, creativity, and contextual judgment. That is both the ethical framing and the commercially correct one — they are the same argument.
The teams who will gain durable advantage over the next 18 months are not the ones with the most AI subscriptions. They are the ones who have moved from tool-collecting to agent-building, from personal productivity improvements to redesigned workflow architecture that changes what the organization can produce and how fast it can produce it.
The Data
The data picture emerging from this panel, the consumer research from Prophet, and MarTech’s coverage of organizational knowledge accessibility reveals a marketing industry in a bifurcated state: a large majority still in the AI tool era, and a meaningful minority beginning to build the agentic and organizational intelligence infrastructure that will define best practice through 2027 and beyond.
AI Maturity Stage Distribution — 2026 Marketing Teams
| AI Maturity Stage | Description | Est. % of Teams | Primary Constraint |
|---|---|---|---|
| Stage 1: Content & Discovery | GenAI for copy, search, images — workflow unchanged | 46% (live conference poll) | Micro-gains only; no architectural change to throughput |
| Stage 2: Agentic Synthesis | Autonomous agents for data gathering, analysis, synthesis | ~40% (estimated) | Integration complexity; cross-functional skills gaps |
| Stage 3: Workflow Redesign | Full org restructure around AI-native processes and team structures | ~14% (estimated) | Change management; leadership alignment; org design |
Source: MarTech, “AI + Human Ingenuity: Where Creative and Technical Teams Meet” (May 2026)
Consumer AI Sentiment Shifts, 2024–2026
| Metric | 2024 | 2026 | Direction |
|---|---|---|---|
| Consumers actively using generative AI | 45% | 73% | ↑ +28 points |
| Consumer excitement about AI | Baseline | −7% from baseline | ↓ Declining |
| Belief AI will “handle most decisions” | Baseline | −30% from baseline | ↓ Sharply declining |
| Consumers worried about AI inaccuracies/misinformation | — | 71% | — |
| Consumers frustrated by eliminated human support | — | 62% | — |
Source: Prophet’s 2026 AI-Powered Consumer Report, via MarTech
The consumer sentiment table reframes the internal AI workflow argument in a commercially concrete way. If 62% of consumers push back when brands remove the human from the loop entirely, then the highest-value deployment of AI is internal — freeing humans to be more fully present in the external interactions that matter most. The marketing teams directing AI investment at internal workflow capability rather than customer-facing automation are likely to win on both efficiency and brand trust simultaneously, which is a rare combination of outcomes.
The organizational data picture compounds the urgency. As MarTech reported in “You Could Already Have the Data AI Needs to Deliver Value”, most enterprises are sitting on years of unstructured marketing intelligence — brand trackers, campaign test results, customer interviews, regional learnings — that teams cannot access at the moment decisions need to be made. The challenge, as MarTech stated directly, “is no longer collecting information, but making organizational knowledge accessible when decisions happen.” Marketing teams generate “some of the richest unstructured data in the organization,” yet it remains fragmented across disconnected systems, queryable only by the people who happen to remember it exists. Agentic retrieval systems that surface this institutional intelligence on demand represent the second major unlocking event — after AI-powered production tools — that moves teams from stage one to genuine stage-two capability.
Real-World Use Cases
Use Case 1: The Agency That Mandated AI Certification Across All Functions
Scenario: A full-service digital agency with 55 staff spanning creative, strategy, SEO, and paid media. AI tool usage is fragmented — some copywriters use generative AI daily, some account managers have never configured a prompt beyond a basic chatbot query. Output quality varies between team members depending on individual AI fluency. There is no shared vocabulary, no consistent governance framework, and no reliable baseline for what AI-assisted work looks like for this team. Clients are receiving inconsistent results depending on which staffer handles their account.
Implementation: Following the Walk West model described by Greg Boone at the MarTech Conference, the agency institutes mandatory generative AI certification for all staff within 90 days. This is not a technical bootcamp — it is a structured learning curriculum covering prompting fundamentals, basic agent configuration, output review and quality governance, and organizational policies for AI-assisted client deliverables. Simultaneously, the agency runs cross-departmental “transformation challenges”: each team maps their highest-frequency, highest-administrative-burden recurring tasks, pilots an AI-assisted alternative, documents what was eliminated and what time was recovered, and shares results in a monthly internal showcase. Account managers learn what creative is automating. Developers discover what strategy is offloading. Everyone discovers what’s possible beyond their immediate function and develops a shared sense of what’s feasible.
Expected Outcome: Within 60 days, a shared baseline AI capability means account managers can configure simple data-gathering agents without waiting on a developer sprint. Creative teams stop spending hours on first-draft structural scaffolding and redirect that capacity to conceptual work and strategic differentiation — the work they were hired to do. Cross-departmental knowledge transfer accelerates because every conversation about workflow improvement is conducted in a shared language. Conservative estimate based on similar agency-side implementations: 15–20% reduction in non-billable administrative overhead per staff member. That efficiency gain flows directly to margin or to additional client capacity without a corresponding headcount increase.
Use Case 2: From Blank Screen to Client-Ready Prototype in Figma
Scenario: A three-person in-house creative team at a mid-market B2B SaaS company. Responsible for all brand assets, landing pages, email design, and internal presentation materials. Turnaround time from written brief to first stakeholder draft currently runs 3–5 business days. The team is stretched, and the blank-screen problem — where every new project burns disproportionate time on structural scaffolding before any genuine creative judgment is applied — is the most commonly cited production bottleneck. Every designer on the team can articulate exactly how much time they spend building structure that isn’t yet creative; it’s merely spatial and organizational.
Implementation: The team integrates AI-powered design assistance within their existing Figma workspace, applying the approach described by Kate Roberts of Cella by Randstad Digital. When a new brief arrives, instead of opening a blank canvas, the designer inputs the brief parameters — audience, format, brand guidelines, messaging hierarchy, desired primary action — and generates a set of wireframe layout options in minutes. The first-draft wireframe goes to the stakeholder for directional feedback before any pixel-perfect design work begins. Iteration happens at the structural level first, at the craft level second. This sequencing eliminates the most expensive kind of revision: late-stage redesign after a designer has already invested hours on visual execution in a direction the stakeholder didn’t actually want.
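The brief-to-wireframe handoff depends on the brief arriving as structured parameters rather than free prose. A minimal sketch of that serialization step follows; the field names are hypothetical, and Figma’s actual AI features are proprietary and not modeled here.

```python
def brief_to_prompt(brief: dict) -> str:
    """Validate a structured design brief and serialize it into a generation prompt.

    Field names are illustrative, not any platform's actual API.
    """
    required = ["audience", "format", "messaging_hierarchy", "primary_action"]
    missing = [k for k in required if not brief.get(k)]
    if missing:
        raise ValueError(f"incomplete brief, missing: {missing}")
    lines = [
        f"Audience: {brief['audience']}",
        f"Format: {brief['format']}",
        "Messaging hierarchy: " + " > ".join(brief["messaging_hierarchy"]),
        f"Primary action: {brief['primary_action']}",
    ]
    if brief.get("brand_guidelines"):
        lines.append(f"Brand guidelines: {brief['brand_guidelines']}")
    return "\n".join(lines)

prompt = brief_to_prompt({
    "audience": "ops leaders at mid-market SaaS firms",
    "format": "landing page",
    "messaging_hierarchy": ["time savings", "reliability", "price"],
    "primary_action": "book a demo",
})
```

Validating required fields up front is what moves iteration to the structural level first: an incomplete brief fails loudly before any layout is generated.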
Expected Outcome: Time from brief to first stakeholder review drops from 3–5 business days to same-day or next-morning delivery. Stakeholder feedback quality improves significantly because they are reacting to actual spatial layouts and content hierarchy rather than imagining them from a written brief. The creative team reclaims roughly 40% of the time previously spent on structural scaffolding, reinvesting it in typography, motion, copy integration, and conceptual differentiation — the elements of design where human judgment provides the most value and AI provides the least. Net effect: faster delivery, higher creative quality, no headcount addition, and a stakeholder relationship built on iteration rather than guessing.
Use Case 3: Agentic Data Synthesis for Campaign Planning
Scenario: A B2C brand with a decade-plus of marketing data — campaign performance reports, customer surveys, brand health trackers, regional test results, post-campaign analysis decks — scattered across SharePoint folders, Dropbox directories, and legacy CRM exports. The marketing director spends 6–8 hours per quarter manually piecing together historical context before launching a new campaign. Institutional knowledge walks out the door every time a senior team member leaves. New hires start every planning cycle essentially from scratch.
Implementation: Applying the organizational knowledge framework described by MarTech in “You Could Already Have the Data AI Needs to Deliver Value”, the team builds an agentic knowledge layer that ingests all unstructured marketing data — PDFs, slide decks, call transcripts, spreadsheets, email summaries — into a retrieval-augmented generation (RAG) system. When the marketing director needs to answer “what messaging resonated with this audience segment in Q3 of previous years,” or “what did the last regional test conclude about pricing sensitivity,” the agent retrieves and synthesizes relevant source documents, returning a structured summary with source citations and confidence indicators. The system is queried in natural language by non-technical marketing staff — no data science expertise is required to operate it day-to-day. Initial setup requires connecting document repositories and configuring the retrieval scope, typically completed by a marketing ops lead over one to two weeks.
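The retrieval step of such a system can be sketched minimally. Real deployments use vector embeddings and a proper index; simple keyword overlap stands in here so the example stays dependency-free, and the document names are invented.

```python
# Hypothetical unstructured marketing records, each tagged with its source file.
documents = [
    {"source": "q3_2024_campaign_recap.pdf",
     "text": "Q3 messaging around time savings resonated with ops buyers."},
    {"source": "regional_pricing_test_2025.pptx",
     "text": "Midwest regional test showed high pricing sensitivity above $49."},
    {"source": "brand_tracker_h1_2025.xlsx",
     "text": "Purchase intent driven primarily by trust and peer referrals."},
]

def retrieve(query: str, docs, top_k: int = 2):
    """Rank documents by term overlap and return excerpts with source citations."""
    terms = set(query.lower().split())
    scored = []
    for doc in docs:
        overlap = len(terms & set(doc["text"].lower().split()))
        if overlap:
            scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [{"source": d["source"], "excerpt": d["text"]} for _, d in scored[:top_k]]

hits = retrieve("what did the regional test conclude about pricing sensitivity", documents)
```

Each result carries its source, which is what makes the synthesized answer citable rather than a black-box claim.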
Expected Outcome: Pre-campaign research time drops from 6–8 hours per planning cycle to under 30 minutes. Historical learnings that previously existed only in the memory of long-tenured employees become accessible to new hires from their first week. Campaign strategies are informed by a broader evidence base — the full body of organizational intelligence rather than what one person remembers or has time to locate under a deadline. Repeated questions that MarTech identified as common pain points — “What messaging resonated previously? What emotional drivers matter most for this audience? What did prior regional launches teach us?” — are answered in seconds rather than through a multi-day archaeology project. This is precisely the stage two agentic synthesis Isaacson described: not AI generating new content, but AI activating and surfacing organizational intelligence that already exists.
Use Case 4: Cross-Functional Agent for Real-Time Project Status Management
Scenario: A marketing team of 20 distributed across creative, content, demand generation, and marketing operations. They run Jira for development-adjacent tickets, Workfront for creative project tracking, and Asana for campaign timelines. Status updates require manual cross-referencing across three platforms. Project managers spend 30–45 minutes every day pulling current status from each system and synthesizing it into a unified view. Deadlines slip because no single person has a real-time, cross-system visibility layer without manually assembling it. Handoffs between creative and technical functions — the highest-risk points in any campaign workflow — regularly stall because the two sides are working from different status pictures at different moments.
Implementation: Following the Jira and Workfront integration approach referenced by Kate Roberts at the MarTech Conference, the team deploys an agent that monitors all three project management systems via API, cross-references due dates and task dependencies, flags emerging blockers in a shared Slack channel, and generates a daily morning briefing that pulls current status from all three platforms into a single consolidated view. Configuration requires API access credentials and approximately one week of prompt tuning to align terminology across systems — no custom software development required. A human project lead sets the monitoring rules and threshold conditions, and reviews the daily output with the ability to adjust agent behavior as the team’s workflow patterns become clearer.
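Once task records from the three systems are normalized into a common shape, the synthesis half of such an agent reduces to a small amount of logic. A sketch under assumed field names; the actual API pulls from Jira, Workfront, and Asana are omitted.

```python
from datetime import date, timedelta

def daily_briefing(tasks, today, horizon_days=3):
    """Flag incomplete tasks due within the horizon, across all systems."""
    horizon = today + timedelta(days=horizon_days)
    at_risk = [t for t in tasks
               if t["status"] != "done" and t["due"] <= horizon]
    at_risk.sort(key=lambda t: t["due"])
    return [f"[{t['system']}] {t['name']} due {t['due']} ({t['status']})"
            for t in at_risk]

# Hypothetical normalized records, as a real agent would assemble via each API.
tasks = [
    {"system": "Jira", "name": "Landing page build", "status": "in_progress",
     "due": date(2026, 5, 14)},
    {"system": "Workfront", "name": "Hero banner design", "status": "done",
     "due": date(2026, 5, 13)},
    {"system": "Asana", "name": "Email sequence copy", "status": "blocked",
     "due": date(2026, 5, 20)},
]
briefing = daily_briefing(tasks, today=date(2026, 5, 12))
```

The 72-hour horizon is what turns blocker identification proactive: work surfaces in the briefing before its deadline slips, not after.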
Expected Outcome: Project manager time on manual status tracking drops by 30–45 minutes per day — roughly 10–15% of total working hours returned to higher-value planning, stakeholder communication, and coordination work. Blocker identification shifts from reactive (someone notices a deadline has already slipped) to proactive (the agent flags a dependency risk 48–72 hours before it becomes a miss). Creative and technical teams gain a shared, real-time status view that dramatically reduces friction at handoff points. Stakeholder communication becomes faster and more accurate because the project lead is working from a synthesized, always-current data picture rather than a manually assembled one that was already outdated by the time it was compiled.
Use Case 5: Protecting Human Touchpoints with AI-Backed Customer Intelligence
Scenario: A direct-to-consumer brand running email, SMS, and live chat customer support. Operational cost pressure has pushed leadership toward eliminating human agents from the tier-one chat queue. The customer experience team is concerned about the brand impact, particularly for subscription management and post-purchase issue resolution — categories where customers have historically been most sensitive about reaching a real person. They need data to push back effectively on the cost argument rather than just an intuitive objection.
Implementation: Rather than eliminating human support, the team uses AI to augment human agents — deploying an AI assistant that pulls purchase history, previous support interaction transcripts, subscription status, and product catalog data, surfacing it to the human agent before they type a single character in response. The agent pre-briefs the human: here is who this customer is, here is their issue history, here is what similar cases resolved to, here are the relevant policy provisions that apply. The human agent sees full context in seconds instead of several minutes of system navigation. AI handles tier-one FAQ routing and account lookup automation. Tier-two and emotionally complex interactions — returns, cancellations, complaints — remain human-led. This model maps directly to what Prophet’s 2026 AI-Powered Consumer Report found: 62% of consumers push back when human support is eliminated entirely, and 71% worry about AI inaccuracy in contexts precisely like shopping and customer service.
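The pre-briefing step is essentially context assembly plus a routing rule. A sketch follows, with the data sources, field names, and routing condition all as hypothetical stand-ins for real CRM, support, and subscription systems.

```python
def build_agent_prebrief(customer_id, crm, tickets, subscriptions):
    """Assemble customer context into one packet a human agent sees before replying."""
    history = [t for t in tickets if t["customer_id"] == customer_id]
    sub = subscriptions.get(customer_id, {})
    return {
        "customer": crm.get(customer_id, {}).get("name", "unknown"),
        "subscription_status": sub.get("status", "none"),
        "open_tickets": [t["subject"] for t in history if t["status"] == "open"],
        "past_issues": [t["subject"] for t in history if t["status"] == "closed"],
        # Routing rule: emotionally complex categories stay human-led.
        "route": "human" if sub.get("status") == "cancel_requested" else "tier1_ai",
    }

crm = {"c42": {"name": "Dana R."}}
tickets = [
    {"customer_id": "c42", "subject": "Late delivery", "status": "closed"},
    {"customer_id": "c42", "subject": "Cancel subscription", "status": "open"},
]
subscriptions = {"c42": {"status": "cancel_requested"}}
prebrief = build_agent_prebrief("c42", crm, tickets, subscriptions)
```

The routing rule encodes the Prophet finding directly: cancellation-adjacent interactions, where the 62% pushback concentrates, never go to the automated tier.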
Expected Outcome: Resolution time for human-handled tickets decreases by 35–40% because agents are context-loaded before they respond. Customer satisfaction scores hold or improve relative to what a fully automated scenario would deliver, based on the consumer preference data. Operational cost reduction is smaller than full automation would achieve, but customer lifetime value and retention metrics justify the difference on a unit economics basis. Most importantly, the brand preserves the trust signal that is becoming increasingly differentiated as competitors race toward full automation and discover the trust erosion that follows.
The Bigger Picture
The May 2026 MarTech Conference panel lands at a precise inflection point in the AI adoption curve for marketing organizations. The AI tool era — roughly 2023 through 2025 — was characterized by individual adoption: a marketer here picking up Claude or ChatGPT for copy drafts, a designer there experimenting with generative image tools, a developer spinning up a no-code automation workflow. These experiments produced real productivity gains for individuals. But they were fundamentally individualistic in their impact. They improved personal task completion speed without touching organizational architecture, throughput, or the structural handoff problems between creative and technical functions.
What the panelists at the May MarTech Conference are describing is the transition to organizational AI — where the unit of adoption is the team or the firm, the unit of value is redesigned workflow architecture rather than individual time savings, and the competitive question is not “which tools does your team use” but “how deeply have those tools changed how your organization actually produces work.” This transition is qualitatively harder than tool adoption, which is the honest explanation for why 46% of the conference audience remained in stage one. It is not because the tools haven’t advanced to make stage two accessible. It is because organizational change requires leadership alignment, cross-functional coordination, and willingness to challenge established process structures that feel comfortable precisely because they’re familiar. Tool adoption requires a credit card and a browser tab. Organizational AI adoption requires a change management strategy.
The consumer sentiment data from Prophet’s 2026 report provides the commercial logic for why the internal workflow focus is the strategically correct priority. As AI adoption becomes genuinely mainstream — 73% consumer usage means AI tools are now approximately as ubiquitous as smartphones — the novelty advantage evaporates. Consumers who use AI daily to draft their own content and summarize their own information are not impressed when brands deploy a clumsy customer-facing chatbot to deflect support inquiries. They recognize it immediately. They push back. And they attribute it to a brand values decision, not just a technology limitation. The market entering what Gartner calls the trough of disillusionment means trust is the new differentiator, not capability.
The organizational data dimension adds the long-term compounding argument. As MarTech noted, marketing teams generate extraordinary volumes of unstructured organizational intelligence — and it sits fragmented and inaccessible. The teams building agentic synthesis capabilities now are simultaneously building an institutional memory that compounds in value over time. Every campaign result, every customer insight, every regional test becomes queryable context that makes the next decision better informed. That is not a productivity tool — it is a strategic capability that grows more valuable as the data layer expands, and that cannot be replicated quickly by a competitor who decides to start building it a year from now.
What Smart Marketers Should Do Now
1. Conduct an honest AI maturity audit before you buy another tool subscription.
If you haven’t sat down with your team leads and mapped what percentage of your recurring workflows have been genuinely restructured — not just assisted with AI — you are almost certainly overestimating your organizational stage. Schedule a 90-minute working session with the people who run the actual work: the creative director, the marketing ops lead, the paid media manager. For each of the top 15 recurring workflow tasks, be explicit about which category applies: Is AI an optional add-on that some people use sometimes (stage one), or has the task been redesigned so AI handles retrieval, drafting, or analysis and humans handle review and judgment (stage two)? That map will tell you where you actually are. Most teams discover they are a full stage behind where leadership believes they are, and that gap determines how quickly the competitive deficit compounds each month.
2. Standardize AI capability across creative and technical functions simultaneously, not sequentially.
The Walk West mandatory certification model, as shared by Greg Boone at the MarTech Conference, solves a problem most organizations don’t name explicitly: AI capability fragmentation. When technical teams can build and configure agents but creative teams cannot prompt or direct them, you replicate the same siloed dysfunction that plagued MarTech stacks for a decade — tools that only one function controls, adoption that depends on developer bandwidth, creative teams that are passive beneficiaries of AI rather than active operators of it. A shared certification baseline does not require every designer to become a developer. It requires everyone to understand what agents can and cannot do well enough to participate in workflow design conversations, configure basic implementations themselves, and flag when an agent output requires human judgment rather than rubber-stamping.
3. Run a formal transformation challenge to systematically identify and eliminate low-value recurring work.
This is not a suggestion to hold an open brainstorm. A transformation challenge, as Boone described it for Walk West, is a structured organizational process: each team formally maps their highest-frequency, lowest-value recurring tasks over a defined two-week period, builds and pilots an AI-assisted or agentic alternative, and presents documented results — what was eliminated, what time was recovered, what worked and what required adjustment — in a cross-functional share-out. The cross-functional sharing component is what separates this from an individual productivity exercise. Creative teams discover what technical teams have already automated. Account managers discover what operations has offloaded. Analytical teams discover what the content team is generating at a fraction of the previous effort. This creates an organizational feedback loop that’s faster and more applied than any structured training program, and people who discover their own efficiency gains become advocates rather than resisters.
4. Build your organizational knowledge layer before adding more AI generation tools on top.
The most underinvested capability in AI-forward marketing organizations is structured retrieval of existing intelligence. Before investing in the next-generation AI production tool, make an honest assessment of what happens when someone on your team needs to know what messaging worked for a given audience segment 18 months ago, or what a regional market test concluded about pricing sensitivity, or what the last three brand health trackers showed about purchase intent driver shifts. If the answer is “they ask the person who’s been here the longest and hope they remember,” you have a foundational problem that no AI generation tool will solve. As MarTech reported, the challenge is making “organizational knowledge accessible when decisions happen.” Invest in auditing where your campaign intelligence, customer research, and historical performance data actually lives, consolidating it into a retrieval-ready architecture, and building the knowledge layer that agentic systems can actually query at decision time. This is a one-time infrastructure investment with returns that compound every quarter as the data layer grows.
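Retrieval-ready consolidation starts with something unglamorous: splitting heterogeneous documents into uniform, source-tagged chunks that a later indexing or embedding step can consume. A minimal sketch, where the file name and chunk size are arbitrary choices, not recommendations:

```python
def chunk_document(source: str, text: str, chunk_size: int = 40):
    """Split text into word-windowed chunks, each tagged with its origin.

    Tagging every chunk with its source is what lets an agentic retrieval
    layer cite where an answer came from at decision time.
    """
    words = text.split()
    chunks = []
    for i in range(0, len(words), chunk_size):
        chunks.append({
            "source": source,
            "chunk_id": f"{source}#{i // chunk_size}",
            "text": " ".join(words[i:i + chunk_size]),
        })
    return chunks

# 100 repeated words stand in for a real extracted document body.
records = chunk_document("brand_tracker_h1_2025.txt", "intent " * 100)
```

In practice the text would come from PDF, slide, and transcript extractors; the point is that every record leaves this step carrying the metadata retrieval will need.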
5. Use consumer trust data to draw a deliberate, documented line about where AI belongs in your customer experience.
The numbers from Prophet’s 2026 report are specific enough to build a policy around: 71% of consumers worry about AI inaccuracies in shopping, research, and customer service contexts — and 62% push back when human support is eliminated entirely. Before deploying AI at any customer-facing touchpoint, run a specific and honest test: does this deployment make the customer’s experience objectively better, or does it primarily reduce our operational cost? If the honest answer is “mostly cost reduction,” find a way to use AI as an augmentation layer for human agents rather than a replacement layer. The short-term savings from eliminating human touchpoints appear on the P&L quickly and look compelling in quarterly reviews. The longer-term cost in brand trust, customer lifetime value, and the accumulated friction of customers who feel underserved shows up much later — and is significantly larger than the efficiency gain that created it.
What to Watch Next
Agentic capabilities deepening inside creative platforms. Roberts' Figma wireframing example previews what is coming across every major creative platform. Watch for Adobe, Canva, and Figma to extend agentic capabilities beyond AI-assisted generation — toward agents that receive a brief, generate draft layouts, iterate based on stakeholder feedback, and route for approval within a single integrated environment. When this becomes standard platform functionality rather than an advanced integration project, it will collapse the distinction between project management and creative production for teams that have already restructured their workflows. Teams in stage one will need months to adapt; teams in stage two will adopt in hours.
Mandatory AI skills frameworks becoming client requirements in agency RFPs. Walk West’s certification model is currently a competitive differentiator in agency pitches and a recruiting signal. By Q4 2026, watch for it to become a standard requirement in enterprise RFPs to agencies, particularly in regulated industries where AI governance documentation is increasingly scrutinized. Marketing certification bodies including the American Marketing Association will likely begin formalizing AI literacy frameworks with recognized credentials within the next 12 months. Agencies that have already run internal certification programs will meet this requirement from a position of demonstrated practice.
Organizational data infrastructure emerging as a hard-to-replicate competitive moat. The RAG-enabled organizational knowledge layer is still early enough that building it now creates a meaningful first-mover data advantage. Over the next 12 months, watch for enterprise marketing platforms — particularly those with existing CRM integration depth — to begin offering commoditized “organizational memory” features. Salesforce, HubSpot, and Adobe Experience Cloud are the most likely vectors for this. Teams that build the knowledge infrastructure before vendors package it as a subscription feature will hold a 12–18 month data quality and coverage advantage that cannot be closed by a competitor purchasing the same platform feature at launch.
Consumer AI literacy accelerating expectations around brand authenticity. The Prophet 2026 data point that 73% of consumers are now active generative AI users has an underappreciated downstream implication: consumers who use these tools daily are increasingly able to identify AI-generated content and AI-managed interactions — and they form strong, often negative opinions about what that signals about a brand’s values and investment in the customer relationship. Track consumer sentiment data quarterly, specifically “trust in brand AI interactions” and “perceived authenticity” metrics. This will move faster than most brand managers are currently planning for.
Cross-functional team structures evolving around agent-first project models. The current marketing org chart — separate creative, technical, and marketing ops functions managed hierarchically — was designed for human-executed work. As agents begin handling cross-functional tasks, expect organizational design experimentation. The most forward-thinking agencies and in-house teams will pilot “agent-first” project structures where a human creative lead and a human data lead jointly oversee a set of agents, rather than managing large specialist teams doing execution work. Early experiments in this model will appear in progressive agencies and SaaS marketing organizations in Q3–Q4 2026, and the results will inform how organizational design conversations evolve into 2027.
Bottom Line
The May 2026 MarTech Conference panel made one thing unambiguous: the organizations still collecting AI tools are falling behind the organizations that have started building agentic AI workflows that change how work gets done rather than just how fast individual tasks are completed. With 46% of marketing practitioners still in stage one — using AI for content generation without touching their underlying workflow architecture — the competitive window for teams willing to move to stage two is wide open right now. The consumer data from Prophet's 2026 report adds the commercial imperative: consumers are adopting AI faster than they are trusting it, and the brands that will earn loyalty are those that use AI to make their human interactions more informed and more responsive, not to remove humans from the equation to cut costs.

Build the organizational knowledge layer. Standardize AI capability across both creative and technical functions. Run the transformation challenges that surface what low-value work to eliminate. Stop chasing tools. The teams that do this in 2026 will have workflow capability and institutional intelligence that competitors cannot replicate by buying a new subscription in 2027.