12/10/25: AI Adoption Splits: Enterprise Dollars Rise While Competition Implodes Everything
Once again it is TIME for the GTM AI Podcast and newsletter sponsored by the GTM AI Academy and AI Business Network.
We have a lot to go through with updates and I think you will LOVE IT.
This week we have a lot to get into, plus an amazing podcast guest: Elliot Garreffa, who is actively building custom AI agents and applications that create real impact for businesses. A really good episode where he shares his screen and goes into detail, which is AWESOME.
As always, here is the NOTEBOOKLM already loaded with the audio overview, video, slides, and more. Enjoy!
Now to the podcast!
You can go to YouTube, Apple, or Spotify, as well as a whole host of other locations, to hear the podcast or watch the video interview.
How to Actually Implement AI That Works (Not Another Failed Pilot)
Summary
Ghost Team co-founder Elliot Garreffa breaks down why most companies struggle to see ROI from AI—and what to do about it. This conversation cuts through the hype to reveal the unglamorous truth: successful AI implementation isn’t about buying licenses or running proofs of concept. It’s about understanding your actual workflows, identifying where AI creates 10x (not 2x) improvements, and building systems that your teams will actually use. Elliot shares real examples of SEO systems that compress months of agency work into minutes, and explains why human-in-the-loop isn’t a compromise—it’s best practice.
Three Key Topics
1. The Real Reason Your AI Initiatives Aren’t Working
Most companies blame the technology when AI underdelivers. The actual problem? They’re trying to automate existing processes instead of reimagining them entirely. Elliot explains that organizations often say “we tried ChatGPT for content and it wasn’t good enough,” but that’s like judging a car’s potential by only driving it in first gear. The breakthrough happens when you stop asking “how can AI make this faster?” and start asking “what process would we design if we had this capability from day one?”
2. The Proof-of-Value Framework That Gets Buy-In
Before writing a single line of code, Ghost Team spends the entire first phase talking to the actual teams who will use the system. They map current workflows, identify pain points, and build a business case that shows dummy data and projected ROI. This approach solves the classic problem where executives think they know the issue (salespeople need better discovery skills) but the data reveals something different (you’re targeting the wrong prospects entirely). The framework ensures you’re solving root causes, not symptoms.
3. Why Autonomy Without Guardrails Fails
Fully autonomous AI systems are still “quite some way” from being reliable across the board. Elliot recommends rigid workflows over flexible agentic systems for most use cases—and always includes human-in-the-loop checkpoints. For SEO content generation, that means one review after research and another before publishing. The counterintuitive insight: these checkpoints don’t slow you down when you’re generating 10 blogs automatically. They make the difference between content that converts and content that feels obviously AI-generated.
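To make that concrete, here is a minimal sketch of a rigid content pipeline with the two checkpoints Elliot describes: a human review after research and another before publishing. It is illustrative only, not Ghost Team’s actual stack; the structure and the llm_call placeholder are assumptions.

from dataclasses import dataclass

@dataclass
class Draft:
    topic: str
    research_notes: str = ""
    body: str = ""
    approved: bool = False

def human_review(stage: str, payload: str) -> bool:
    # Checkpoint: pause the pipeline and ask a reviewer to approve or reject.
    print(f"--- Review checkpoint: {stage} ---")
    print(payload[:500])
    return input("Approve? [y/N] ").strip().lower() == "y"

def run_pipeline(topic: str, llm_call) -> Draft | None:
    # llm_call stands in for whatever model or research tool you actually use.
    draft = Draft(topic=topic)

    # Step 1: automated research.
    draft.research_notes = llm_call(f"Research keywords and angles for: {topic}")

    # Checkpoint 1: a human validates the research before any writing happens.
    if not human_review("research", draft.research_notes):
        return None

    # Step 2: automated drafting, constrained by the approved research.
    draft.body = llm_call(
        f"Write a blog post on '{topic}' using this research:\n{draft.research_notes}"
    )

    # Checkpoint 2: a human signs off before anything gets published.
    draft.approved = human_review("pre-publish", draft.body)
    return draft if draft.approved else None

The point of the structure: when you generate ten posts in a batch, those two gates add minutes, not months, and they are what keeps the output from reading as obviously AI-generated.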
Key Quotes
“If you just go and prompt an LLM and try to create some content, it might make incremental improvements, but it’s actually not that good. Where we come in is building systems that create 10x, 100x improvements.”
“You have to really understand the problem before you tackle it. Any good technology implementation does that step upfront, and we’ve found it incredibly true for the AI space.”
“Don’t think about using AI to do this part of this process. You can just do an entirely different process. That gets far better results than just slapping things on top.”
“The kind of thing that would’ve taken months to get back from an agency—we’re doing it directly within the chat window. That obviously changes a lot in terms of a typical process.”
“When you add that human touch, it is significantly better. Regardless of how much training data you have, you can still really tell whether something has been AI-generated or not.”
“One of the best things you can do when first getting started: look at things you’re spending a huge amount of time on. Focus on automating those first. They save you time, which means you can focus on more valuable tasks.”
“People see these workflow automations on LinkedIn and they want them. But whether these systems work is all about the detail under the hood—the prompting, the training data, the customization to your brand.”
Ready to Stop Spinning Your Wheels on AI?
This conversation goes deep on MCPs, context engineering, and the technical stack that actually delivers results. Listen to the full episode to hear Elliot walk through a live demo of automated SEO research and strategy that would take traditional agencies weeks to produce, and to learn why starting with Lindy or n8n beats jumping straight to building custom SDR systems.
AI Adoption Splits: Enterprise Dollars Rise While Competition Implodes Everything
The AI market just entered a new phase. While enterprise budgets surge to record highs and 82% of leaders now use Gen AI weekly, the foundation beneath that growth is cracking. OpenAI declared “Code Red” as Google’s Gemini 3 surges past 650 million users. Anthropic’s AI-conducted interviews are revealing massive trust gaps. And new data shows large companies are quietly walking back AI adoption even as they spend more. The consensus is breaking. The consolidation is starting. And the window to position your AI strategy is closing fast.
Summary Overview
Three things are happening simultaneously. First, enterprise AI spending is exploding: 88% of executives plan budget increases in 2026, two-thirds of organizations are investing over $5 million annually, and nearly half of tech budgets are now directed to AI. Second, the vendor market is collapsing into a competitive war that will reshape platform economics by 2027. OpenAI is burning capital at a rate that demands either $100 billion in new funding or aggressive revenue extraction, while Google just proved Gemini 3 can scale to 650 million users in months using existing ecosystem distribution. And third, despite spending increases, adoption is declining—large companies’ AI adoption rate dropped from 14% to 12% in late 2025, and 95% of AI pilots at large enterprises are failing.
The synthesis is clear: companies are spending more, choosing between fewer vendors, and failing to deliver results at scale. The margin for error in platform selection and AI capability building has collapsed.
One-Sentence Link Summaries
Anthropic Interviewer Study — Anthropic deployed Claude to conduct 1,250 interviews with professionals, revealing that while 86% report AI saves time, 69% hide their AI use due to workplace stigma, and scientists refuse to trust AI for core research despite wanting it.
Google Opal SEO Agent Demo — Julian Goldie showcases Google’s new no-code AI agent builder Opal, demonstrating how to construct a complete SEO system covering keyword research, technical audits, competitor analysis, and content generation without writing code.
McKinsey on AI-Capable Domain Leaders — McKinsey identifies the critical shortage of “domain owners” who combine business expertise with AI fluency, finding only 17% of Fortune 500 senior leaders have technical skills, yet these hybrid leaders are essential to delivering transformational ROI.
Tech Giants’ AI Debt Surge — Reuters and other outlets report hyperscalers issued nearly $90 billion in bonds since September 2025 to fund AI infrastructure, with capital expenditure projected to hit $600 billion by 2027, raising concerns about debt sustainability and market capacity.
OpenAI Enterprise Usage Growth — TechCrunch details OpenAI’s claim that ChatGPT Enterprise message volume grew 8x since November 2024, with workers saving 40-60 minutes daily, though these figures arrived days after Sam Altman’s internal “code red” memo about Google’s Gemini 3 threat.
Wharton 2025 AI Adoption Report — The third annual study from Wharton and GBK Collective shows 82% of enterprise leaders now use Gen AI weekly (up from 37% in 2023), 75% report positive ROI, and 88% plan budget increases, yet training investment dropped 8 points and skill confidence fell 14 points.
AI Spending Debt Concerns — Multiple financial outlets document investor anxiety as tech companies shift from cash reserves to public debt markets, with Meta issuing $30 billion, Oracle $18 billion, and Amazon $15 billion in bonds to fund AI data centers.
Fortune on AI Adoption Decline — Census Bureau data reveals AI adoption among large companies (250+ employees) declined from 14% peak to 12% by late summer 2025, following MIT research showing 95% of generative AI pilots at large companies were failing.
OpenAI vs. Google Competition — The Verge and others report OpenAI CEO Sam Altman declared internal “code red” after Google’s Gemini 3 outperformed ChatGPT on key benchmarks, accelerating the planned new reasoning model release to beat Gemini while preparing for talent defections to competitors.
Perplexity AI State of AI Vision — This source discusses broader AI market trends, bubble concerns, and the disconnect between AI hype and actual enterprise value realization as investment continues to surge despite implementation challenges.
Five Trends Reshaping Enterprise AI Strategy Right Now
1. The Great AI Skills Paradox Is Reaching a Breaking Point
Enterprise AI budgets are exploding while training investments crater—and this collision is creating the single largest execution risk in the market today.
The data is stark. The Wharton study shows 88% of executives now plan to increase Gen AI spending in 2026, with two-thirds of organizations investing over $5 million annually and roughly 30% of tech budgets redirected to internal AI R&D. Daily AI usage among enterprise leaders jumped 17 percentage points year-over-year to reach 46%, meaning AI integration is no longer aspirational—it is operational and spreading. OpenAI reports ChatGPT Enterprise message volume grew 8x since November 2024, and workers using the platform save 40-60 minutes daily, suggesting genuine productivity gains in early adopter companies.
Yet beneath this spending surge lies a capability collapse. Despite 43% of organizations acknowledging critical technical skill gaps, formal training investment dropped 8 percentage points, and confidence in training as the primary path to AI fluency plummeted 14 points. The consequence is measurable: 43% of leaders now report employees are losing hands-on proficiency as automation increases, creating what researchers call “skill atrophy”—the paradox where AI tools that promise to amplify human capability instead leave teams less capable of intervention when models fail.
McKinsey’s analysis reveals the core problem: enterprises lack “domain owners”—the N-2 and N-3 executives who combine deep business expertise with sufficient technical fluency to oversee AI-enabled transformation. Their research of Fortune 500 LinkedIn profiles found only 17% of senior leaders’ skill sets are technical, and just 5% held technical roles during their careers. Without these hybrid leaders who can reimagine end-to-end processes, speak credibly about data architecture, and oversee cross-functional tech delivery, AI investments generate pilots rather than productivity.
For GTM leaders, the implication is stark: you cannot buy transformation with vendor licensing alone. You have to build the people who can execute it. And the window to build that capability before your competitors do is closing rapidly.
2. The Trust Crisis: Why 95% of AI Pilots Fail and Adoption Is Declining
The market is beginning to revert from euphoria to reality. While budgets increase, adoption is declining—a dangerous divergence that reveals the gap between AI hype and operational truth.
U.S. Census Bureau data tracked through the Business Trends and Outlook Survey shows AI adoption among large firms (250+ employees) declined from a 14% peak earlier in 2025 to 12% by late summer. This reversal is striking: growth had been steep from just 3.7% in September 2023 to 9.2% in Q2 2025, suggesting many organizations rushed into adoption only to discover implementation realities don’t match vendor promises. MIT research points to the core problem: 95% of generative AI pilots at large companies were failing.
The reason became clear when Anthropic deployed Claude to conduct 1,250 interviews with professionals across creative, scientific, and general workforce categories. The results are damning. Scientists reported trust and reliability concerns as the primary barrier in 79% of interviews, with researchers stating plainly: “If I have to double check and confirm every single detail the agent is giving me to make sure there are no mistakes, that kind of defeats the purpose”. Creatives showed consistently low trust across all disciplines despite reporting productivity gains, while 69% of general workforce professionals mentioned social stigma around AI use—one fact-checker told the interviewer “I don’t tell anyone my process because I know how a lot of people feel about AI”.
The pattern is consistent across domains: AI can accelerate work, but the verification burden and reliability concerns prevent it from replacing human judgment, especially in high-stakes or complex domains. The real cost of “AI-assisted” work is the fact-checking overhead, domain expertise validation, and contextual review—tasks that often take as long as creating the content manually. That means the 40-60 minutes of daily productivity gains OpenAI reports are real, but they’re distributed across organizations in patterns that reveal which roles actually benefit from AI (routine tasks with low verification burden) and which roles absorb heavy friction (knowledge work requiring accuracy and judgment).
This creates a market divergence: companies with clear use cases and rigorous implementation (like OpenAI’s own customers) see productivity gains and higher adoption. Companies that treated AI as a general-purpose tool and expected broad transformation are quietly scaling back expectations while maintaining budget commitments. Hence: spending increases alongside adoption declines.
3. The First Real AI Platform War: Why OpenAI’s “Code Red” Changes Everything
For the first time since ChatGPT’s November 2022 launch, a credible competitor is threatening OpenAI’s monopoly on enterprise mindshare. Sam Altman’s internal “Code Red” memo signals not panic—OpenAI is still winning on raw engagement—but recognition that the era of default dominance is ending.
The competitive shift came in two moves. First, Google released Gemini 3 on November 18, 2025, with strong multimodal reasoning and code performance that outperformed ChatGPT on internal benchmarks. Second, Google deployed Gemini 3 “day one” across its entire ecosystem: Search, Workspace, Cloud, and more, reaching 650 million monthly users in weeks—a deployment velocity that leverages installed base distribution OpenAI cannot match.
That distribution advantage is not hypothetical. Gemini is now default in Google Search, integrated into Gmail and Calendar, embedded in Workspace, and native to millions of existing enterprise deployments. For organizations already using Google Cloud or Google Workspace—a massive cohort—Gemini adoption requires no new procurement, no new integrations, no new training infrastructure. It’s just there.
OpenAI still leads on raw engagement and brand. ChatGPT’s 800 million weekly users far exceed Gemini’s 650 million monthly users, and “ChatGPT” remains synonymous with AI in consumer minds. But brand alone doesn’t survive distribution-driven competition. Altman’s memo makes clear OpenAI is accelerating a new reasoning model to match Gemini 3’s performance, delaying ad plans to redirect resources, and preparing staff for “rough vibes” as the competitive pressure intensifies.
What matters for GTM leaders is structural: the single-vendor world is over. Over the next 18 months, expect a shift from “which platform?” to “which stack, where, and why?” Multi-vendor deployments will become standard. Procurement will widen. And vendor defensibility will increasingly depend on ecosystem integration, not just model quality. If your AI strategy still assumes a single default vendor, it is already obsolete.
4. The $600 Billion Bet: How AI Infrastructure Debt Could Reshape Tech Economics
Tech giants are making the largest debt-fueled infrastructure bet in corporate history, and the financial community is starting to worry that this bet may not pay off at the scale companies are projecting.
Since September 2025, four major hyperscalers issued nearly $90 billion in public bonds: Meta ($30 billion), Alphabet ($25 billion), Oracle ($18 billion), and Amazon ($15 billion). Add Meta’s $27 billion private credit facility with Blue Owl Capital, and total 2025 debt issuance from hyperscalers exceeds $120 billion—more than four times the $28 billion average over the previous five years. This represents a fundamental shift for Silicon Valley companies that traditionally funded infrastructure growth from operating cash flows rather than public debt.
The spending curve is accelerating. AI capital expenditure is projected to increase from $200 billion in 2024 to nearly $400 billion in 2025, reaching $600 billion by 2027. Morgan Stanley forecasts $2.9 trillion in cumulative AI spending between 2025 and 2028, with roughly half requiring external financing. These are not marginal investments—they are generational commitments of capital that will define competitive positions for the next five years.
The risk is not bankruptcy. Analysts estimate only 10-20% of future AI expenditures will require debt financing, with 80-90% still coming from operating cash flows. Goldman Sachs notes hyperscalers could accommodate $30-50 billion more in debt while keeping leverage below average A+ rated companies, suggesting balance sheets remain healthy. The real risk is whether AI generates sufficient returns to justify the spending curve. UBS estimates leading hyperscalers will shift from net cash positions to modest borrowing while maintaining leverage below 1x (total debt less than annual earnings), which is sustainable. But observers point to dot-com era warning signs—peaks in investment spending, rapid corporate debt rises, and eventual credit spread widening that preceded the 2001 collapse.
For GTM leaders, the implication is clear: vendor sustainability matters now. The companies aggressively pursuing market share at any cost—burning capital to acquire users or defend position—are taking on structural risk. Ask hard questions about your primary vendors’ funding runway, burn rate, and path to profitability. Companies like OpenAI (burning toward 2029 while needing $100 billion in fresh capital) are making high-stakes bets on revenue extraction. Companies like Google (with profitable core Search business funding AI) have more optionality. And companies like Anthropic (still in scaling mode but with large capital commitments from major backers) are in the middle. Your vendor strategy now needs a financial analyst lens, not just a product evaluation.
5. The Domain Leader Shortage: Why Technical Skills Must Enter the C-Suite (Or Your Competitors Will)
The bottleneck preventing AI transformation is not technology—it is leadership capability. And companies that solve this problem first will build durable competitive advantage against those that don’t.
McKinsey identifies “domain owners”—N-2 and N-3 executives who lead business lines or functions and can drive end-to-end change by combining traditional business muscle with tech fluency—as the single most critical role for AI transformation success. These leaders reimagine customer journeys with AI at the center, develop transformation road maps with sequenced use cases and clear KPIs, oversee cross-functional tech delivery teams, and own adoption and scaling rather than delegating to IT.
The problem is scale. Most large companies have 15-30 core business processes or customer journeys, each requiring a leader and team with the right functional mix. That means organizations need 75-150 leaders among their N-2 and N-3 population with this hybrid profile. Current bench strength is nowhere near adequate. Citizens Bank’s Adam Boyd exemplifies the model: he led home equity lending transformation that reduced customer wait times from 35+ days to just a few days by working side-by-side with technology, credit, risk, compliance, strategy, and finance leaders. Boyd didn’t just oversee development teams—he learned agile software delivery, understood the bank’s data architecture and technology stack, stayed involved to overcome roadblocks, and owned end-to-end change management.
Building this capability at scale requires deliberate action. First, clarify which business domains have the most AI transformation potential. Second, honestly assess whether current domain leaders have the necessary traits and skills—McKinsey notes companies commonly replace 20-30% of domain leaders during transformation efforts. Third, launch strategic upskilling programs that go beyond product management to cover process reengineering, AI models, data management, engineering talent assessment, and change management, with hands-on practice through consulting partnerships, two-in-a-box leadership models, or capstone projects. Finally, shift operating models to embed engineering talent in domain owner teams with persistent funding (roadmaps rather than individual projects) and aligned performance incentives.
For revenue leaders specifically, the implication is personal. AI fluency is no longer optional for executive advancement. The window to build that second muscle—to move from business domain expert to AI-enabled business leader—is narrow and closing. Companies that build domain leader capability faster will execute AI transformations faster, will capture more value from their AI investments, and will consolidate market position against slower competitors. The race for AI leadership is increasingly a race for AI-capable leaders.
What This Convergence Means: Three Actions for GTM Leaders
The AI market has moved from “should we adopt AI?” to “how do we win in an increasingly competitive AI-vendor landscape while building the transformation capabilities to actually deliver ROI?” The market is bifurcating into three tiers: companies that will lead (building domain capability, selecting resilient vendors, executing disciplined pilots), companies that will follow (copying winners, staying flexible on tooling, investing in skills after seeing proof points), and companies that will lag (treating AI as a one-time IT project, betting everything on a single vendor, cutting training when budgets tighten).
The timeline is compressed. OpenAI and Google are fighting for enterprise dominance right now. Anthropic is building serious credibility. Meta is emerging as a credible infrastructure and capability provider through Superintelligence Labs. Within 18 months, the platform-of-choice landscape will be largely set, and switching costs will be real. Within 36 months, winners and losers will be clear.
Three actions matter immediately:
First, audit your vendor and platform strategy. If you are still betting on a single default AI vendor, you are already behind. Map your key use cases across roles, assess which models (OpenAI, Gemini, Anthropic, open-weight) are actually best-suited to each use case, and build infrastructure that abstracts across multiple models rather than locking you in (a minimal sketch of such an abstraction follows these three actions). Google and OpenAI’s competitive arms race will create opportunities to negotiate better terms, migrate workloads, and optimize spend over the next 12 months. Use that window.
Second, build domain leader capability before your competitors do. Identify your top 20-30 business leaders who could become AI-enabled domain owners. Assess honestly whether they have the tech fluency and learning agility to make the jump. For high-potential leaders, invest in targeted upskilling—not generic “AI awareness” training, but hands-on execution with actual AI tools, problem-solving workshops focused on your industry, and coaching through real transformation projects. For leaders who lack the trajectory or interest, accelerate replacement with candidates who combine domain expertise with technical or analytical strength. The companies that create 30-50 AI-fluent domain leaders in the next 24 months will own their markets. Those that don’t will watch competitors consolidate advantage.
Third, shift from pilot thinking to execution discipline. The era where companies launched AI pilots to “explore possibilities” is over. The 95% pilot failure rate reflects too much exploratory work and too little ruthless sequencing around use cases that have clear ROI, customer or employee value, and realistic integration timelines. Pick 3-5 high-impact use cases in GTM or operations. Staff them with domain leaders plus technical partners. Hold them to delivery schedules and measurable outcomes. Kill ruthlessly anything that doesn’t meet gates. And measure not just AI model performance, but human capability, adoption rates, and business impact. That discipline will separate winners from the long tail of companies that spent money without getting results.
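As promised above, here is a minimal sketch of the multi-model abstraction from the first action. The provider names, route keys, and generate() signature are illustrative assumptions, not any vendor’s actual API; the point is that each use case calls one thin interface, so swapping vendors becomes a configuration change rather than a rewrite.

from typing import Callable, Dict

# Every provider is wrapped behind the same callable signature: prompt in, text out.
Provider = Callable[[str], str]

def openai_provider(prompt: str) -> str:
    raise NotImplementedError("call your OpenAI client here")

def gemini_provider(prompt: str) -> str:
    raise NotImplementedError("call your Gemini client here")

def claude_provider(prompt: str) -> str:
    raise NotImplementedError("call your Anthropic client here")

# Routing table: map each GTM use case to a provider, and revisit the mapping
# as pricing, quality, and contract terms shift during the vendor arms race.
ROUTES: Dict[str, Provider] = {
    "email_drafting": openai_provider,
    "call_summaries": gemini_provider,
    "long_form_research": claude_provider,
}

def generate(use_case: str, prompt: str) -> str:
    provider = ROUTES.get(use_case)
    if provider is None:
        raise ValueError(f"No provider configured for use case: {use_case}")
    return provider(prompt)

Moving a use case to a different vendor is then a one-line change to ROUTES, which is exactly the optionality the next 12 months of platform competition rewards.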
The Bottom Line
The AI transformation wave is entering its hardest stage. The easy part—adopting tools and running pilots—is done. The hard part—building the people, processes, and organizational discipline to actually generate ROI at scale—is just starting. Companies that treat this as a technology problem will fail. Companies that treat it as a leadership and execution problem will win.
The vendors will compete fiercely. Google’s ecosystem advantage is real and growing. OpenAI’s technical lead and brand are still formidable but no longer insurmountable. The capital requirements are staggering. And the talent wars are heating up. For GTM leaders, that competition is actually good news—it creates optionality and forces vendors to compete on value rather than lock-in. But you have to move fast to capture that advantage.
The window is open. The clock is ticking. And the margin for error is gone.
What’s Next
GTM AI Academy helps revenue leaders and teams navigate exactly this moment: building AI strategy that aligns with your business model, developing the internal capability that makes AI stick, and executing with discipline to turn investment into measurable competitive advantage. Whether you’re scaling your AI team, building AI-native go-to-market workflows, or preparing your organization for the leadership transitions this transformation requires, we provide the frameworks, training, and expert guidance to move faster than your competitors.



