GTM AI Podcast 9/23/25: The AI Maturity Gap Is Now a Competitive Threat
Welcome to another week of amazingness and high-speed AI updates. Lots of goodies this week, all sponsored by the GTM AI Academy and the AI Business Network. Each week we send out podcast interviews with GTM, AI, or revenue leaders and founders to give us an inside look into current tech and what is coming next.
This is in two sections: the first is the podcast, and the second is a breakdown of articles, research, updates, and other AI newsworthy items to keep you up to speed.
I left all the articles and the podcast in a NotebookLM you can hop over to, where you can listen to the audio or chat with the content.
Also, as a bonus, there's an AI Readiness Interactive Assessment for GTM Leaders based on today's newsletter!
Let's get into it!
You can go to YouTube, Apple, or Spotify, as well as a whole host of other locations, to hear the podcast or see the video interview.
GTM AI Podcast Review: Russell Mikowski on AI-Powered Psychometrics and Team Optimization
Summary
In this episode of the GTM AI podcast, host Jonathan Kvarfordt interviews Russell Mikowski, CEO of SurePeople, about revolutionizing workplace psychometrics through AI integration. The conversation reveals both compelling opportunities and critical considerations for GTM leaders evaluating personality assessment technologies for their organizations.
The Human-AI Collaboration Imperative
As AI automates increasing portions of business operations, Russell Mikowski presents a counterintuitive thesis: human interactions become more, not less, critical to organizational success. His company, SurePeople, positions itself at the intersection of traditional psychometric assessment and modern AI workflow integration, claiming to solve the chronic underutilization of personality data in business contexts.
Mikowski's unconventional background—DJ, poker player, music magazine editor turned sales leader—provides an interesting lens into how diverse experiences can inform approaches to human psychology and technology integration. His journey from making $22,000 as an associate editor in New York to leading a psychometric AI company illustrates the non-linear paths often found in the startup ecosystem.
Core Value Proposition Analysis
The Problem Statement
SurePeople addresses a genuine pain point in organizational development. Traditional psychometric assessments like DISC, Myers-Briggs, and StrengthsFinder suffer from what Mikowski calls the "desk drawer problem"—results are generated, discussed briefly, then abandoned without ongoing application. This represents a significant waste of investment in human intelligence gathering.
The company identifies three critical gaps in current approaches:
1. Limited organizational penetration: Assessments typically reach only leadership teams, not entire organizations
2. No workflow integration: Results exist in isolation from daily work tools
3. Lack of contextual application: No real-time guidance for specific situations like meetings or communications
The PRISM Solution
SurePeople's PRISM assessment measures six (or seven, including personality under pressure) construct areas, positioning itself as "the most robust psychometric assessment on the market." While this claim requires independent validation, the 30-minute completion time suggests comprehensive data gathering.
The strategic advantage lies not in the assessment itself, but in the integration layer. Tools like "SureMeeting" provide real-time personality insights during Zoom calls, offering suggestions on communication approaches based on participants' psychological profiles.
Powerful Quotes from Russell Mikowski
On the Problem with Traditional Assessments
"The results of those assessments are often metaphorically stuffed into our desk drawers, sometimes literally in Manila folders, and kind of die there. Right? So SurePeople is looking to completely disrupt the traditional psychometric world by democratizing access to psychometrics across organizations."
On Meeting Efficiency
"Should they be leading with the why, because the group leans towards big-picture thinking and that's really valuable? Or should they get straight to the details and the data, because this group doesn't care at all about how your kids are, or even why the company has made this decision? They just want to get the specifics that impact them."
On Culture Driving Performance
"Culture begets performance, right? If people feel seen, heard, and understood, they're less likely to be flight risks, they're less likely to be quiet quitting, they're less likely to be negatively impacted by a Slack message that they took the wrong way."
On the Future of Human Value in an AI World
"The expected output, and in a related manner, the value of each human individual on teams is rising, right? I think, just naturally speaking, given those conditions. So the optimization of the interaction, the communication, the collaboration between more-valuable-than-ever human resources..."
On AI Personality Integration
"What if your personality, as determined by a psychometric, could be essentially fed into an agent to make the language that it uses sound more like you. And then there would be consistency, perhaps, between that initial automated outreach, and how you actually act when you get on a demo with someone."
On Personal Awareness and AI Assistance
"As a big picture thinker, and now my PRISM type is coming in, I'm bad with details. I'm horrible with lists. I'm horrible with making sure attention is appropriately given to minute things that need to be done. But where AI can stay on top of those for me, make reminders, and keep me informed... that's freeing up my time to think creatively."
On PRISM's Market Position
"PRISM is the most accurate psychometric on the market today. So we're comfortable that our tool is the appropriate vehicle for powering interactions that matter, in moments that matter, on platforms that you already use."
Core Concepts & Frameworks
The "Desk Drawer Problem"
Traditional psychometric assessments suffer from a fundamental utilization issue: results are generated, briefly discussed, then abandoned without ongoing application or integration into daily workflows.
Democratized Psychometrics
Moving beyond leadership-only assessments to organization-wide personality intelligence that informs every interaction and collaboration.
Workflow-Integrated Intelligence
Embedding personality insights directly into tools people already use (Slack, Zoom, email) rather than requiring separate platforms or manual reference.
The Human-AI Value Multiplication
As AI handles routine tasks, human interactions become more critical and valuable, requiring optimization through better understanding of personality and communication preferences.
Precision vs. Power Personality Types
Data showing that detail-oriented "precise" personalities (architects, scientists, researchers) are more likely to adopt and frequently use AI-powered personality tools than "powerful" or "versatile" types.
Conclusion: The Human Advantage in an AI World
Russell Mikowski's central thesis—that human interactions become more valuable as AI handles routine tasks—presents a compelling framework for thinking about organizational investment priorities. As teams become smaller but more productive, the quality of human collaboration indeed becomes more critical to success.
SurePeople's approach to integrating psychometric insights into daily workflows addresses real pain points in organizational effectiveness. However, the success of such initiatives depends heavily on execution quality, cultural fit, and sustained organizational commitment to applying the insights generated.
For GTM leaders, the question isn't whether personality insights have value—they clearly do. The question is whether SurePeople's specific approach provides sufficient differentiation and integration quality to justify the investment and change management effort required.
The conversation suggests a mature understanding of both the opportunities and challenges in this space. Mikowski's background in sales leadership provides credibility when discussing practical applications, while his recognition of AI's limitations in human interaction shows thoughtful positioning.
As the AI revolution continues to reshape business operations, tools that enhance rather than replace human capabilities may indeed provide sustainable competitive advantages. SurePeople's bet on personality-informed collaboration represents one approach to maintaining the human edge in an increasingly automated world.
The ultimate test will be whether organizations can successfully integrate these insights into their culture and whether the promised efficiency gains materialize at scale. For now, the conversation provides a thoughtful framework for evaluating the role of enhanced human intelligence in modern GTM operations.
The AI Maturity Divide: Why Most GTM Orgs Are Falling Behind
AI is no longer optional, but the gap between early adopters and late movers has become a structural threat. Five major studies and articles from this last week reveal converging signals: while consumer usage is growing and executive optimism holds, trust is fractured, skills are lacking, and organizational readiness remains the exception. For GTM leaders, this is not about catching up—it’s about changing how you build, structure, and measure your teams and technology around AI. The winners are shifting from reactive experimentation to operational AI discipline.
This analysis distills findings from five sources, with direct links to each below:
1. Anthropic CEO Interview – Business Insider
Dario Amodei warns AI replacing jobs is necessary and coming soon
2. TDWI Best Practices Report Q1 2025 – ZoomInfo
Creating an AI-Ready Organization
3. CNET Survey – AI and Human Behavior Concerns
US Adults Worry AI Will Make Us Worse at Being Human
4. arXiv Preprint – LLM Behavioral Reversal Research
5. Pew Research – Public and Expert Views on AI (April 2025)
How the U.S. Public and AI Experts View Artificial Intelligence
What the Data Says
Each report surfaces critical points, but the signal gets louder when viewed together.
In the TDWI AI Readiness report, only 45% of enterprise respondents said they are “mature” or “very mature” in AI. Yet those who reach maturity report 25% efficiency gains and 20% cost savings, especially when deploying AI into real-time workflows and customer interactions. But the gap between intent and capability is wide. Generative AI is already in production at 26% of firms, but only 17% of organizations say they are truly “very mature.”
Meanwhile, the Pew Research survey finds that while AI experts are generally optimistic about AI’s long-term benefits, only 19% of U.S. adults believe AI will do more good than harm. Consumers remain cautious. In Deloitte’s KSA+UAE study, 58% of users said they’d avoid customer service powered by generative AI, and 54% said they would trust AI-generated emails less.
The CNET national survey confirms this tension. It shows most U.S. adults are deeply concerned about AI’s impact on human behavior. The fears aren’t abstract—they include reduced critical thinking, loss of empathy, and detachment from human connection.
One additional red flag comes from new arXiv research on large language models. The paper highlights that LLMs change behavior post-deployment due to updates in training data or model architecture. In short, the same GenAI model used today might act differently in three months—creating reliability risk in production systems.
Pattern 1: Maturity = Measurable Impact
Organizations that call themselves “very mature” with AI are not dabbling. They have embedded models into their decision-making processes, deployed apps internally, and aligned teams to support production AI efforts. According to the TDWI report, these companies are 3.5x more likely to have deployed GenAI in production compared to “somewhat mature” firms.
More importantly, mature organizations are more likely to:
Measure AI’s contribution to business outcomes
Track ROI and efficiency metrics
Use structured data alongside text, image, and IoT sources
Employ AI developers and MLOps engineers to ensure scalability and reliability
When companies operate this way, success compounds. Impact leads to better executive buy-in, more investment in talent, and an accelerating feedback loop of improvement.
Pattern 2: Skills and Literacy Are the Hardest Bottlenecks
Technology is not the barrier. The real constraint is human capital. Across both enterprise and consumer data, the most common blockers are lack of skills, low AI literacy, and poor organizational clarity around how and when to use AI.
The TDWI survey makes this plain:
25% say they lack AI model-building skills
24% say their teams lack AI literacy
18% say they lack skills to build AI applications
Only 35% offer AI literacy programs internally
But those who do invest in training are winning. High-impact organizations are 2.7x more likely to offer AI literacy programs than those who struggle to get value. These companies treat literacy not as a one-off course, but as a continuous enablement function.
Pattern 3: Public Trust Is Eroding Fast
Consumers are no longer in the “wow” phase. They’re entering a cautious, skeptical mode. In the Deloitte data from UAE and KSA, people are using GenAI for personal and educational purposes, but only 47% use it for work, and that number may shrink as corporate guardrails go up.
Trust issues stem from two places:
Confusion about how AI works and when to trust it
Lack of transparency in company usage of GenAI for customer communication
The result is a growing reluctance to engage with AI systems in brand and service interactions. This puts GTM teams in a difficult position. The efficiency gains from automation are real, but customer experience suffers if trust is broken.
Pattern 4: Drift in LLM Behavior Will Break Static Systems
The arXiv research shows that language models exhibit behavioral drift over time. This means the same prompt might return different results a few weeks apart. If your workflows, playbooks, or user experiences rely on a GenAI model acting predictably, this is a threat vector.
You cannot treat LLMs like stable SaaS tools. You need monitoring systems. You need version tracking. You need change detection.
Organizations that treat GenAI as a black box will fall behind. Those that invest in observability, feedback loops, and performance regression testing will avoid the volatility trap.
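The monitoring idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the prompt suite and outputs are stand-ins, not real model text): keep baseline outputs for a fixed set of prompts, re-run them against the current model version, and flag any prompt whose answer has drifted past a similarity threshold. A production system would use semantic similarity and task-level evals rather than raw text matching, but the regression-testing loop is the same.

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Rough text similarity in [0, 1] via difflib's SequenceMatcher."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def detect_drift(baseline: dict[str, str],
                 current: dict[str, str],
                 threshold: float = 0.8) -> list[str]:
    """Return the prompts whose current output diverges from baseline."""
    drifted = []
    for prompt, expected in baseline.items():
        actual = current.get(prompt, "")
        if similarity(expected, actual) < threshold:
            drifted.append(prompt)
    return drifted

# Example regression suite; the prompt names and outputs are illustrative.
baseline = {
    "summarize_policy": "Refunds are issued within 30 days of purchase.",
    "qualify_lead": "Score: 82. Strong fit; route to enterprise team.",
}
current = {
    "summarize_policy": "Refunds are issued within 30 days of purchase.",
    "qualify_lead": "I'm sorry, I can't help with scoring leads.",
}

print(detect_drift(baseline, current))  # flags 'qualify_lead' as drifted
```

Running a suite like this on a schedule, and again after every model or prompt update, turns "the model changed under us" from a surprise into an alert.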
Pattern 5: Governance, Not Just Strategy, Drives Execution
Across all sources, the presence of AI governance teams correlated with better outcomes. TDWI found that governance is still rare, but the most successful companies already have formal AI oversight in place, separate from traditional data governance.
Governance isn’t compliance. It’s the discipline of:
Monitoring model performance
Managing bias, fairness, and explainability
Deciding when human override is needed
Aligning AI outcomes with business value
Without this, AI will underperform or create risk. With it, you unlock reliability, clarity, and scale.
What GTM Leaders Should Do Now
This isn’t about catching up. It’s about restructuring how GTM teams operate. You need to embed AI into your people, your systems, and your metrics. Start with five non-negotiables:
1. Formalize AI Literacy Across All Roles
Run role-specific training and establish internal communities of practice. Do not centralize this only in data teams.
2. Set Measurable AI Success Metrics
Track AI contribution to KPIs like cost per opportunity, win rate lift, rep productivity, or cycle time. Tie each AI project to a business impact variable.
3. Invest in Governance and Observability
Stand up a lightweight but cross-functional governance council. Track model drift, usage patterns, failure rates, and hallucination risks.
4. Prepare for Consumer Pushback and Mistrust
Clearly label AI-generated outputs. Let users opt out. Build human-in-the-loop experiences where needed. Transparency will be a growth driver.
5. Build Real-Time Execution Workflows, Not AI Toys
Stop chasing feature demos. Build automated, AI-powered workflows that reduce friction in prospecting, forecasting, onboarding, or renewals.
The Strategic Risk: Waiting
You don’t need to be perfect. But if you’re still running workshops on “What is GenAI?” while others are embedding agent frameworks into RevOps, the gap is no longer conceptual; it’s operational.
The data is clear. Organizations that act now, build readiness, and structure around measurable execution will widen the lead. Everyone else will lose time, trust, and talent.
Your move.