12/4/2025: Why 88% AI Adoption Actually Means You’re Already Behind
Once again it is TIME for the GTM AI Podcast and newsletter sponsored by the GTM AI Academy and AI Business Network.
This week we have a lot to get into, plus an amazing podcast guest, Ken Roden, whose research on AI adoption we dig into.
And as per usual, I have loaded all the resources, research, and info into a NotebookLM that you can access if you so desire.
Now to the podcast!
You can go to YouTube, Apple, Spotify, and a host of other locations to hear the podcast or see the video interview.
Why 93% of Your Team Uses AI But You Think It’s 30%
Your AI pilots aren’t failing because of technology - they’re failing because your team doesn’t trust your vision
GTM AI Podcast with Ken Roden
https://www.futurecraftai.media - Ken's podcast
Connect with Ken on LinkedIn: https://www.linkedin.com/in/kenroden/
The Leadership Blind Spot That’s Killing Your AI Strategy
If you're a GTM leader who thinks roughly 30% of your team is using AI, I have uncomfortable news: 93% of white-collar professionals are already using it. That's not a typo. That's the finding from Ken Roden's doctoral research at Temple University, which surveyed 200 professionals and produced statistically significant results (R > 0.79).
The gap between what’s actually happening and what leadership perceives is now the single biggest barrier to AI execution. And it gets worse.
The Real Reason Your AI Pilots Are Failing
Every headline screams that 95% of AI pilots are failing. MIT published research. Consultants are writing case studies. Everyone assumes the problem is employee resistance, inadequate technology, or change management failures.
They’re all wrong.
Ken’s research reveals the actual failure point: employees don’t trust their leadership’s vision for how AI will be implemented. It’s not that people won’t use AI - they’re already using it extensively. It’s that they don’t believe leadership understands what they’re doing or has a coherent strategy for scaling it.
Think about what that means. Your team is running shadow AI operations right now. They’re using ChatGPT, Claude, and dozens of other tools to do their jobs better. But when you announce your official AI initiative, they don’t trust it enough to adopt it at scale.
Key Quotes That Reveal the Pattern
On the confidence-competence gap:
“There was definitely a correlation between people who said they use AI regularly and them saying that I am confident in my abilities to use AI. And I would say that’s dangerous. Because what, to your point exactly, you might think you’re good at this, but you’re actually maybe not as good as you think.”
On what’s actually working:
“The stuff that works, the people have the most success with, it’s the most boring stuff. It’s how do we get data from our Slack channel about customer insights into Salesforce... One of the most interesting use cases I saw... saved 20 hours a week per rep.”
On job security concerns:
“When I started this, I thought the reason why AI adoption was gonna struggle was because of job insecurity. I really did think that... but the data actually shows that’s not what the issue is.”
On the AI sandwich model:
“They’re taking what I’m calling the AI sandwich model, which is layering human intelligence at the start and the end with AI workflows spread out throughout the process... it brings the team along for the journey.”
Major Themes and Strategic Implications
Theme 1: The Perception Gap Is Your Competitive Vulnerability
When leaders think adoption is 30% and it's actually 93%, you have no visibility into what your team is actually doing with AI. You can't optimize what you can't see. You can't scale what you don't understand. And you can't build competitive advantages on shadow operations.
The organizations that will win in 2026 are the ones closing this perception gap right now - understanding actual AI usage patterns, bringing those workflows into strategic focus, and building deliberate execution frameworks around what’s already working.
Theme 2: Trust Architecture Beats Technology Every Time
Ken’s research shows that technical capability isn’t the blocker. Your team already knows how to use AI. The blocker is trust - specifically, trust in leadership’s understanding of the technology and vision for implementation.
Building trust architecture means:
Involving frontline employees in AI workflow design (Swedish research shows this dramatically improves adoption)
Setting realistic goals (10% improvement, not 500% pipeline increase in 3 months)
Being transparent about what you’re trying to achieve and why
Demonstrating that leadership actually understands the tools
Theme 3: Operational Boring Wins, Strategic Flashy Fails
The use cases that are actually delivering ROI are unglamorous: moving Slack insights into Salesforce, automating CRM data entry, synthesizing customer research from fragmented sources. Ken’s example of a customer success team saving 20 hours per week per rep on account research wasn’t sexy - but it was measurable, immediate, and scalable.
Meanwhile, the ambitious marketing campaigns and sophisticated outreach cadences are stuck in pilot purgatory. The pattern is clear: nail the operational fundamentals before chasing strategic moonshots.
Theme 4: The Confidence-Competence Gap Is Creating False Experts
Just because someone uses AI daily doesn’t mean they’re good at it. Ken’s research found a dangerous correlation: frequency of use directly correlates with confidence, but confidence doesn’t correlate with actual competence.
This creates a workforce that thinks they’re AI experts when they’re actually running two-sentence prompts. For GTM leaders, this means you need to establish quality bars, provide real training, and calibrate expectations around what “good” actually looks like.
Theme 5: AI Fatigue Is Real and Coming for Your Team
Ken identified emerging research on a new type of cognitive fatigue from intensive AI use. Unlike Zoom fatigue (which is about passive consumption), AI fatigue comes from the intense cognitive load of having deep, multi-threaded conversations with AI systems for extended periods.
Teams implementing AI without addressing this will hit productivity walls despite having better tools. The solution isn’t less AI - it’s better workflow design that balances AI augmentation with human recovery time.
Theme 6: GPT-5’s “Gronk” Problem Signals a Market Shift
The conversation revealed something critical about the frontier model landscape: OpenAI is optimizing GPT-5 for general consumer audiences, making it less useful for sophisticated business users. Ken and Coach both noted weaker outputs, less control, and a “black box” problem with GPT-5’s agent capabilities.
For GTM teams, this means diversifying your AI stack. Claude for coding and complex reasoning. Gemini for multimodal and UI generation. Domain-specific tools for operational workflows. The era of “just use ChatGPT for everything” is over.
The AI Sandwich Model: Your Practical Starting Point
Ken’s recommendation for organizations starting their AI journey is the “AI sandwich model” - human intelligence at the beginning and end of processes, with AI workflows handling the middle execution steps.
This approach works because it:
Keeps humans in strategic decision-making roles
Allows teams to get comfortable with AI gradually
Doesn’t require blowing up existing processes
Creates natural checkpoints for quality control
Makes it easier to identify what’s working and iterate
For a GTM team, this might look like:
Human: Sales rep identifies target account and defines research objectives
AI: Agent conducts deep research, synthesizes data from multiple sources, identifies key stakeholders, maps org structure
Human: Rep reviews insights, refines approach, crafts personalized outreach strategy
The sandwich model isn’t the end state - it’s the bridge from pilot to production that most organizations are missing.
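To make the sandwich concrete, here is a minimal sketch of that three-step flow in Python. Everything here is illustrative: the `llm_call` hook stands in for whatever model client you use, and none of the function names come from Ken's research.

```python
from dataclasses import dataclass

@dataclass
class ResearchBrief:
    account: str
    objectives: list[str]  # top slice: set by the human rep


def run_account_research(brief: ResearchBrief, llm_call) -> dict:
    """Middle layer: AI synthesizes research against the rep's objectives."""
    prompt = (
        f"Research the account '{brief.account}'. "
        f"Objectives: {'; '.join(brief.objectives)}. "
        "Return key stakeholders, org structure, and notable insights."
    )
    return {"account": brief.account, "findings": llm_call(prompt)}


def review_and_approve(draft: dict) -> dict:
    """Bottom slice: the rep reviews and signs off before any outreach."""
    print(f"--- Draft research for {draft['account']} ---")
    print(draft["findings"])
    approved = input("Approve for outreach? [y/n] ").strip().lower() == "y"
    return {**draft, "approved": approved}


if __name__ == "__main__":
    fake_llm = lambda p: f"(model output for: {p[:60]}...)"  # stub for a real client
    brief = ResearchBrief("Acme Corp", ["map the org", "find a champion"])
    print(review_and_approve(run_account_research(brief, fake_llm)))
```

The point of the structure is the checkpoints: the human-owned steps are explicit function boundaries, which is what makes quality control and iteration natural.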
What This Means for Your 2026 Planning
Three actions emerge from this research:
First, audit actual AI usage across your team. Not what you think they’re doing - what they’re actually doing. Anonymous surveys, one-on-ones, direct observation. You need to close the 93% vs 30% perception gap before you can build effective strategy.
Second, build trust before you build technology. Involve frontline employees in workflow design. Set achievable goals. Be transparent about your vision. Remember: your AI strategy will fail not because the technology doesn’t work, but because your team doesn’t trust your implementation approach.
Third, start with operational boring. Find the 20-hour-per-week-per-rep time sinks in your organization. Automate data entry. Connect siloed information sources. Eliminate repetitive research tasks. These aren’t exciting, but they’re what actually scales and delivers measurable ROI in quarters, not years.
The gap between AI experimenters and AI executors is widening. This research shows exactly why - and exactly what to do about it.
Why 88% AI Adoption Actually Means You’re Already Behind
Sources & Data
This analysis draws from:
McKinsey Global AI Survey (2025)
TST Technology AI Infrastructure Reports
Andreessen Horowitz GTM Research
AI Revolution Policy Lab
Relevance AI Research
ArXiv AI Architecture Papers
The Uncomfortable Truth About AI Adoption
Here’s the number that should terrify you: 88% of companies are now using AI in at least one function. That’s up from 78% last year. Sounds like progress, right?
Wrong.
Only one-third of those organizations have moved beyond pilots to actual scaled deployment. The rest are trapped in what I call “pilot purgatory” - running small experiments, generating impressive demos for leadership, and accomplishing exactly nothing that moves the revenue needle.
The spread between AI experimenters and AI executors is widening into a chasm. And if you’re reading this newsletter hoping to find the magic prompt that will change everything, you’ve already misunderstood the game.
The Infrastructure War You’re Not Watching
While you’ve been debating whether to upgrade from GPT-4 to GPT-5.1, the trillion-dollar chess match that actually determines your future competitive landscape just shifted dramatically.
AWS and OpenAI announced a $38 billion partnership on November 29th, fundamentally restructuring the AI infrastructure stack. This isn’t just a cloud deal - it’s OpenAI diversifying away from single-vendor dependence while AWS positions itself as the backbone for enterprise AI at scale. Separately, AWS committed $50 billion to government AI capacity, adding 1.3 gigawatts of secure compute infrastructure.
Two days earlier, Microsoft and NVIDIA invested $15 billion into Anthropic, catapulting Claude’s maker to a $350 billion valuation. The strategic play here matters more than the number: Anthropic agreed to spend $30 billion on Azure capacity, and Claude is now the only frontier model available across all three major clouds - Amazon, Google, and Microsoft.
Let me connect the dots for you. The companies building your AI tools are spending tens of billions to ensure compute availability, redundancy, and cross-platform flexibility. Meanwhile, Oracle and OpenAI are constructing nearly 1 gigawatt of new AI data center capacity in Wisconsin - four interconnected facilities scheduled for 2028 completion.
What does this mean for your GTM strategy? The infrastructure capacity to support AI-first business models is being laid right now. The compute bottleneck that has constrained AI scaling for the past two years is being systematically eliminated. By 2028, the excuse “we couldn’t scale because of infrastructure limitations” will be exposed as exactly what it is - an excuse.
Model Wars: The Performance Gap Is Closing, The Capability Gap Is Exploding
Three major model releases landed within 48 hours of each other, and the pattern reveals something critical about where this market is heading.
OpenAI’s GPT-5.1 launched with two variants: “Instant” for quick interactions and “Thinking” for complex reasoning tasks. The model dynamically adjusts processing time based on task difficulty - spending 30 seconds thinking through a complex contract analysis but responding instantly to simple queries. This isn’t just faster AI. It’s AI that understands context well enough to allocate its own computational resources.
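We don't see what this looks like from the builder's side, but a hedged sketch of the same idea - route lightweight queries to a fast variant and complex work to a reasoning variant - might look like this. The model identifiers and the complexity heuristic are my assumptions for illustration, not OpenAI's actual routing logic:

```python
# Illustrative only: identifiers and heuristic are assumptions,
# not OpenAI's actual GPT-5.1 routing.
FAST_MODEL = "gpt-5.1-instant"        # hypothetical name
REASONING_MODEL = "gpt-5.1-thinking"  # hypothetical name

ANALYTICAL_SIGNALS = ("analyze", "contract", "compare", "plan", "multi-step")

def estimate_complexity(task: str) -> float:
    """Crude proxy: long, analytical tasks score higher."""
    hits = sum(s in task.lower() for s in ANALYTICAL_SIGNALS)
    return min(1.0, hits / len(ANALYTICAL_SIGNALS) + len(task) / 2000)

def route(task: str) -> str:
    """Send complex work to the reasoning variant; everything else stays fast."""
    return REASONING_MODEL if estimate_complexity(task) > 0.3 else FAST_MODEL

print(route("What time is the demo?"))                     # fast variant
print(route("Analyze this 40-page contract for risk..."))  # reasoning variant
```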
Google’s Gemini 3 became the first model to break a key industry benchmark threshold for advanced reasoning, but that’s not why it matters. Gemini 3 introduced generative UI - the ability to autonomously create entire web or app experiences tailored to user needs instead of just answering prompts. Think about what that means for your product demos, your sales collateral, your customer onboarding. The AI isn’t just generating content anymore. It’s generating entire functional experiences.
Anthropic’s Claude Opus 4.5 can write and execute its own code autonomously for 20+ minutes, outperforming both OpenAI and Google on complex coding and problem-solving evaluations. This model isn’t waiting for human instructions. It’s pursuing multi-step objectives independently.
Here’s the pattern that matters: We’ve moved from “AI that answers questions” to “AI that performs extended work.” Every major model release is pushing toward autonomous agent capabilities that can handle complex, multi-step workflows with minimal human oversight.
For GTM teams, this means the productivity ceiling just lifted dramatically. But - and this is crucial - only for teams that have already figured out how to integrate AI into actual workflows at scale.
The 3-Month ROI Mandate Is Killing Traditional Sales Cycles
The most significant shift in enterprise buying behavior isn’t coming from technology advancement. It’s coming from board-level impatience.
Enterprise buyers now expect positive ROI within three months of purchase. Not six months. Not a year. Ninety days. 57% of AI software buyers have this expectation, with 11% demanding near-immediate returns.
This has obliterated the traditional enterprise sales playbook. The 6-month pilot program? Dead. The 90-day proof of concept before discussing production deployment? Dead. The multi-quarter stakeholder alignment process? Dead.
70% of AI software buyers now cite speed of deployment as a top factor in vendor selection. They’re not evaluating your roadmap. They’re evaluating how fast they can show their CEO that the AI initiative is working.
This creates a brutal dynamic: Companies that built their GTM motion around careful, methodical enterprise sales processes are losing deals to competitors who can demonstrate immediate value. The vendors winning right now aren’t the ones with the most sophisticated technology. They’re the ones who can show a working solution in the buyer’s actual environment within the first week.
I’ve watched this play out in dozens of enterprise deals over the past quarter. The vendor that brings a live demo using the prospect’s real data - even if it’s imperfect - beats the vendor with the polished deck and the comprehensive roadmap every single time.
Why Your Pilots Are Failing (And What Actually Works)
Here’s the uncomfortable pattern in the adoption data: 64% of companies credit AI with boosting innovation, but only 39% have seen even minor profit improvement at the enterprise level.
Translation: Everyone is building cool AI projects. Almost nobody is making more money.
The gap exists because most organizations are approaching AI as a technology deployment instead of a workflow transformation. They’re adding AI capabilities to existing processes rather than redesigning processes around what AI makes possible.
The organizations seeing actual financial returns share three characteristics that separate them from the pilot-purgatory crowd:
First, they started with business outcomes, not technology capabilities. Instead of asking “what can we do with GPT-5?” they asked “what business constraint would we eliminate if we could process 1,000x more information in real-time?” Then they built backward to the AI implementation.
Second, they embedded AI deeply into core workflows, not as bolt-on tools. The difference between an AI assistant that sales reps can optionally use and an AI system that automatically enriches every deal in the pipeline before the rep even looks at it is the difference between 12% adoption and 94% adoption.
Third, they measured aggressively and killed failed experiments fast. The high-performing organizations in McKinsey’s survey run more AI experiments and shut down more AI experiments than average performers. They’re not more successful because they pick better projects. They’re more successful because they identify failed projects faster and reallocate resources.
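To illustrate the second characteristic, here is a hedged sketch of the "embedded, not bolt-on" distinction: an enrichment step that fires automatically on every new deal via a CRM webhook, rather than an assistant a rep has to remember to open. The hook, helpers, and field names are hypothetical.

```python
# Hypothetical event-driven enrichment: every new deal gets an AI brief
# before a rep ever opens it (contrast with an optional, rep-invoked tool).
def enrich_deal(deal: dict, llm_call) -> dict:
    prompt = f"Summarize public context for {deal['company']} and flag risks."
    return {**deal, "ai_brief": llm_call(prompt)}

def on_deal_created(deal: dict, crm_update, llm_call) -> None:
    """Webhook handler: enrichment is part of the pipeline, not opt-in."""
    crm_update(deal["id"], enrich_deal(deal, llm_call))
```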
The Rise of AI Agents: 62% Are Experimenting, 23% Are Scaling
The term “AI agent” gets thrown around carelessly, so let’s be precise about what we’re seeing in the data. 62% of organizations are experimenting with autonomous AI agent systems, but only 23% have any agent use at production scale.
An AI agent, in this context, is a system that can perform multi-step tasks with minimal human intervention - breaking down objectives, using tools, making decisions, and adapting based on results. This is fundamentally different from a chatbot that answers questions or a model that generates content on demand.
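In skeleton form, the loop that separates an agent from a chatbot looks something like this. A hedged sketch only: the tool protocol and stopping logic are illustrative assumptions, not any particular vendor's framework.

```python
# Illustrative agent loop: plan, act with tools, observe, adapt.
def run_agent(objective: str, llm_call, tools: dict, max_steps: int = 10) -> str:
    history = [f"Objective: {objective}"]
    for _ in range(max_steps):
        decision = llm_call(
            "\n".join(history)
            + f"\nAvailable tools: {list(tools)}."
            + "\nReply 'TOOL <name> <input>' to act, or 'DONE <answer>' to finish."
        )
        if decision.startswith("DONE"):
            return decision[len("DONE"):].strip()
        _, name, arg = decision.split(" ", 2)  # e.g. "TOOL search acme corp"
        observation = tools[name](arg)         # act, then feed the result back
        history.append(f"Used {name}({arg}) -> {observation}")
    return "Stopped: step budget exhausted."
```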
The early production deployments concentrate in three areas: IT service desks, knowledge management systems, and customer support workflows. These are environments where the agent operates within defined boundaries, the cost of mistakes is manageable, and success metrics are clear.
What’s not scaling yet - and this matters for GTM teams - are agents in high-stakes decision-making roles. Agents that qualify leads, negotiate contracts, or make pricing decisions exist in pilot form across dozens of organizations. But the gap between pilot and production in these areas is massive.
The blocker isn’t technology. The models are capable enough. The blocker is trust architecture - the systems, processes, and guardrails that allow humans to confidently delegate consequential decisions to autonomous systems.
Organizations figuring out trust architecture right now are building what will become insurmountable competitive advantages in 2026-2027. Because once you can confidently let an AI agent handle the first three qualification calls with every inbound lead, you’ve just multiplied your sales capacity by 10x without hiring anyone.
The New GTM Playbook: Show, Prove, Scale
If you’re still leading sales conversations with capability presentations and roadmap discussions, you’re living in 2023. The enterprise buying process has compressed into a new three-stage model that looks nothing like traditional enterprise sales.
Stage One: Show (Week 1) - Live demonstration in the prospect’s actual environment, using their real data, solving their specific problem. Not a sandbox. Not a hypothetical. The actual system doing actual work. The goal isn’t perfection. The goal is proof that this isn’t vaporware.
Stage Two: Prove (Weeks 2-8) - Limited production deployment in a contained use case with clear success metrics. This isn’t a pilot to evaluate whether the technology works. It’s a fast-path to ROI that proves the business case. Most successful implementations now target one specific workflow with one specific team and measure obsessively.
Stage Three: Scale (Weeks 9-16) - Expansion across teams, use cases, and workflows based on demonstrated results from Stage Two. This is where traditional enterprise deployments used to start. Now it’s where they finish.
The companies executing this playbook are closing enterprise deals in 10-12 weeks that used to take 9-12 months. They’re doing it by removing uncertainty at every stage and by structuring deals around outcomes rather than capabilities.
Outcome-based pricing is becoming table stakes. When you charge based on achieved metrics - hours saved, leads generated, tickets resolved - rather than per-seat licenses, you remove the buyer’s risk and align your incentives with their success. The vendors making this shift are accelerating deals and expanding faster.
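As a toy illustration of that incentive shift (all numbers invented, not from the sources), compare a flat per-seat license with a fee tied to measured outcomes:

```python
def per_seat_fee(seats: int, price_per_seat: float) -> float:
    """Traditional licensing: the buyer pays regardless of results."""
    return seats * price_per_seat

def outcome_fee(hours_saved: float, rate_per_hour: float, cap: float) -> float:
    """Outcome-based pricing: the vendor earns against measured savings,
    capped so the buyer's exposure stays bounded."""
    return min(hours_saved * rate_per_hour, cap)

# Invented numbers: 50 seats at $100/month vs. $15 per verified hour saved.
print(per_seat_fee(50, 100.0))          # 5000.0 whether or not value lands
print(outcome_fee(800, 15.0, 20000.0))  # 12000.0, scales with delivered value
```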
What The Research Actually Reveals About AGI
Two research developments this week deserve attention because they expose something important about where we’re headed.
Socratic maieutic prompting - a technique that structures AI interactions as guided Q&A dialogues rather than direct questions - is showing meaningful improvements in logical consistency and reasoning quality. The approach forces the model to articulate and evaluate its chain of thought recursively, similar to how a human tutor would probe student understanding.
This matters for GTM applications because it suggests we’re still dramatically underutilizing current model capabilities. Most teams are using frontier AI models as fancy autocomplete. The organizations extracting real value are investing in prompt engineering, workflow design, and systematic testing to push these models to their actual performance ceiling.
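A minimal sketch of what maieutic-style prompting could look like in practice. The three-step structure (elicit probing sub-questions, answer them, reconcile) follows the description above; the `ask` parameter is a stand-in for whatever chat client you use, and the exact prompts are my own illustration, not the paper's protocol.

```python
def maieutic_answer(question: str, ask) -> str:
    """Guided Q&A instead of a direct one-shot question:
    1) elicit probing sub-questions, 2) answer each, 3) reconcile."""
    probes = ask(
        "Before answering, list three probing sub-questions a tutor would "
        f"ask to test understanding of: {question}"
    )
    examined = ask(
        f"Answer each sub-question, noting any contradictions:\n{probes}"
    )
    return ask(
        f"Question: {question}\n"
        f"Sub-questions and answers:\n{examined}\n"
        "Give a final answer consistent with the reasoning above; "
        "flag and resolve any inconsistencies first."
    )

# Works with any callable that maps a prompt string to a completion string:
# answer = maieutic_answer("Will outcome pricing cannibalize seat revenue?", ask)
```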
Meanwhile, a new paper challenges the foundational assumption driving most AI investment: that scaling current architectures will lead to artificial general intelligence. The researchers argue that today’s neural networks, regardless of size, function as sophisticated pattern matchers that lack the dynamic structure required for true reasoning.
I’m not here to debate AGI timelines. But this perspective should inform your strategic planning. The AI capabilities available today and over the next 2-3 years are extraordinary - but they operate within specific constraints. Building your business strategy around the assumption that GPT-7 or Claude 6 will suddenly achieve human-level reasoning across all domains is probably unwise.
Build for the AI that exists and the AI that’s clearly emerging. That’s more than enough to create massive competitive advantages.
The Pattern That Should Change Your 2026 Planning
Strip away the noise and three statistical realities define the competitive landscape:
Reality One: 88% of companies are using AI, but only 33% are scaling it. The spread between experimenters and executors is the new competitive moat. If you haven’t moved from pilot to production in at least one core workflow, you’re falling behind regardless of how many AI tools your team has access to.
Reality Two: 57% of enterprise buyers expect 3-month ROI, and 70% prioritize deployment speed in vendor selection. Your sales cycle needs to deliver proof of value in weeks, not quarters. If your current GTM motion can’t do this, your competitors’ motion will.
Reality Three: The infrastructure to support AI-first business models is being built right now with unprecedented capital deployment. The constraint isn’t compute availability anymore. The constraint is your ability to redesign workflows around AI capabilities.
The organizations that will dominate in 2026 aren’t the ones with the most sophisticated AI strategy. They’re the ones that figured out execution in 2025.