2/26/26: How the Socure RevOps Team Killed Tool Sprawl, Automated Deal Intelligence, and Scaled 2.5x Without Adding Headcount.
Here we are for the GTM AI Podcast where we get to feature some experts in the field and their own workflows.
As a reminder, we changed formats so guests show us, behind the scenes, how they use AI on their teams for real results. We're also publishing these twice a week; there was so much interest in this format that we want to get the episodes out in a timely fashion. Today's is a good one with the RevOps team from Socure, who also helped create the GTM AI Orchestration Playbook you can see below!
You can go to YouTube, Apple Podcasts, Spotify, and a host of other locations to hear the podcast or watch the video interview.
Socure’s average deal is $600-700K. Sales cycles run 12-18 months. Their CEO sits in account Slack channels with 47 other people. And nobody schedules “sync” meetings anymore.
Evan Morgan and Colin Gerber from Socure's RevOps team walked me through the exact wiring on this week's GTM AI Podcast. Four tools. One orchestration layer. Complete deal intelligence flowing to every human who touches the account. Here's the breakdown:
1) The tools don’t matter. The wiring between them does.
Socure records every call with Clari Copilot. Momentum ingests those transcripts, extracts MEDDPICC fields, flags deal risks, routes product concerns, drafts follow-ups, and pushes structured summaries to account-specific Slack channels. Minutes, not days.
Before this: reps manually summarized calls. Notes hit the right people 48 hours late. Technical conversations never reached executive stakeholders. The AE was the single point of failure for all deal context.
Now: a morning technical call and an afternoon executive call both land in the same channel, automatically, with full context. The SC self-serves. The CEO reads it on his own time. Nobody asks “can you loop me in?”
Most teams buy great tools that operate as isolated islands. The orchestration layer is the entire game.
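To make the wiring concrete, here is a minimal sketch of the pattern: transcript in, structured summary out, routed to an account-specific Slack channel. All names and payload shapes are hypothetical; the real pipeline (Clari Copilot into Momentum) uses LLM extraction, while the summary and risk logic here are deliberately naive stubs.

```python
# Hypothetical sketch of the transcript -> structured payload -> Slack pattern.
# Field names and channel conventions are invented for illustration.

MEDDPICC_FIELDS = ["Metrics", "Economic Buyer", "Decision Criteria",
                   "Decision Process", "Paper Process", "Identify Pain",
                   "Champion", "Competition"]

def summarize_call(transcript):
    """Turn a raw call transcript into the structured payload that would be
    pushed to the account's Slack channel and written back to the CRM."""
    text = transcript["text"]
    return {
        "account": transcript["account"],
        "channel": f"#acct-{transcript['account'].lower()}",
        "summary": text[:200],                            # stand-in for an LLM summary
        "meddpicc": {f: None for f in MEDDPICC_FIELDS},   # one slot per extracted field
        "risks": [line for line in text.splitlines()
                  if "concern" in line.lower()],          # naive risk flagging
    }

call = {"account": "Acme",
        "text": "Pricing concern raised by the CFO.\nNext step: security review."}
payload = summarize_call(call)
print(payload["channel"], len(payload["risks"]))  # #acct-acme 1
```

The point is the shape, not the stubs: one structured object feeds the Slack channel, the CRM fields, and anything searching over them downstream.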
2) They plugged Glean into Salesforce. It was useless. Then they fixed it.
Colin was blunt. When they first connected Glean (enterprise search) directly to Salesforce, it couldn’t return accurate answers. Custom fields, custom objects, internal naming conventions. Raw CRM data is a mess for AI to parse.
The fix: point Glean at orchestrated data from Momentum instead of raw Salesforce records. Clean, structured call summaries. Curated deal context. Reliable search results.
The output is a one-click pre-meeting brief that pulls six months of data into seven sections, including:
Executive relationship status
Deals in flight with risk flags
Recent wins and product feedback from actual call transcripts
Internal and external stakeholder maps
Recommended talking points and meeting goals
Four separate teams had been requesting variations of this same brief. Colin built one Glean agent that serves all four. Redundant requests eliminated.
3) Two councils. Zero tool sprawl.
Socure spent two years consolidating their stack. They killed redundant vendors, standardized on Clari's full suite, and stood up an Enterprise Architecture Council that intercepts new tool requests with a simple question: "We already have something that does this. Why would we buy another?"
They recently added an AI Architectural Council on top. When Colin saw four teams requesting separate agents that did 90% the same work, the council caught it and consolidated.
Their evaluation framework before any new AI capability:
Are we already collecting this data? (80% of the time: yes)
Is there a better capture method than what we’re doing today?
Can we validate AI output with human review before production?
Does it enrich existing systems or create a new silo?
Crawl. Walk. Run. Every time.
4) AI sells widgets. Humans sell $700K deals. For now.
Evan didn’t hedge: AI is coming for every job. But at Socure’s deal sizes, with bespoke solutions, 18-month cycles, and multi-stakeholder buying committees, the relationship is the moat.
Where AI selling lands first: Socure’s self-service and PLG motion. Prepackaged solutions. Predictable pricing. Lower-touch process. Colin called it “widget selling.” Enterprise goes last.
Their next build reflects this: a pricing and packaging agent that enforces quoting guardrails for new reps. Instead of deal desk rejecting bad quotes five times, the agent catches errors before submission. Colin estimates 70% reduction in quoting back-and-forth for new hires.
Why this matters:
Socure didn’t try to boil the ocean. They sequenced: data flow first (Momentum), then searchability (Glean), then governance (councils), then agents on top of clean data.
What to do this week:
Test your enterprise search tool against 5 real account questions. If results are unreliable, your data layer needs curation before your AI layer works.
Count how many teams are requesting similar AI capabilities. Consolidate before you build.
Identify one high-volume workflow (like quoting) where an agent could eliminate 70% of the repetitive back-and-forth.
Stop buying tools. Start wiring them together.
THE GTM AI ORCHESTRATION PLAYBOOK
How to wire your AI tools into a single intelligence layer that compresses sales cycles, eliminates information silos, and scales without proportional headcount.
Framework + Implementation Guide + Templates + Cheat Sheet
Inspired by the GTM AI Podcast episode with Socure's Colin Gerber & Evan Morgan
Inside This Playbook
01 — The Orchestration Thesis
02 — The O-A-R Maturity Model
03 — The 5-Layer Stack Architecture
04 — Data Flow & Capture
05 — Cross-Team Visibility
06 — Signal Routing & Action
07 — AI Governance & Stack Discipline
08 — Scale & Self-Service
09 — Implementation Templates
10 — The Orchestration Cheat Sheet
01 | The Orchestration Thesis
Why buying AI tools is the easy part. Wiring them together is the whole game.
Socure sells identity verification to enterprise buyers. Average deal size: $600-700K. Sales cycles: 12-18 months. Account teams: 47 people in a single Slack channel. Their CEO is active in deal channels. And nobody schedules “sync” meetings anymore.
That last sentence is the thesis of this entire playbook. When the orchestration layer works, the meetings, the “can you loop me in?” messages, the 48-hour delays on call notes, the stale CRM data that torpedoes your forecast: all of it evaporates.
Most GTM teams approach AI like a shopping spree. Buy a call recorder. Buy an enterprise search tool. Buy a forecasting platform. Each one generates valuable signal. None of them talk to each other. The result: isolated islands of intelligence that nobody trusts enough to act on.
THE CORE PRINCIPLE: AI tools generate signal. The orchestration layer turns signal into action. Without orchestration, you have expensive data storage. With it, you have an intelligence system that runs on autopilot.
Socure by the numbers:
$700K — Avg deal size
47 — People per Slack channel
18 months — Sales cycle
10 — GTM verticals
Socure didn’t get here overnight. Colin Gerber (RevOps) and Evan Morgan (GTM Ops & Strategy) spent two years consolidating their tech stack, killing redundant vendors, and building the orchestration layer that now connects Clari, Momentum, Glean, and Granola into a single intelligence system. This playbook is the distillation of that journey.
02 | The O-A-R Maturity Model
Every AI use case fits one of three stages. Knowing where you are determines what you build next.
During the podcast, we discussed the O-A-R framework from Coach K’s GTM AI Academy that I teach during the AI Strategy Class with Sales Assembly: Optimize, Amplify, Reinvent. Colin placed Socure’s current work “between O and A,” which is exactly the right self-assessment. Most teams overestimate their maturity. Here’s the honest breakdown:
OPTIMIZE Automate what you already do manually. Same process, less human labor. Socure example: Auto-populating MEDDPICC fields from call transcripts instead of reps typing them in. Litmus test: Are reps still manually entering data that AI could capture from calls?
AMPLIFY Do things humans couldn’t do at scale. New capabilities unlocked by AI. Socure example: One-click pre-meeting brief pulling 6 months of orchestrated data across all sources. Litmus test: Can your execs get full account context without scheduling a single prep call?
REINVENT Fundamentally redesign the workflow. The process itself changes. Socure example: Pricing agent that enforces quoting guardrails autonomously. No human in the loop. Litmus test: Do you have any workflows where AI makes decisions, not just recommendations?
THE SEQUENCING TRAP: Most teams try to Reinvent before they’ve Optimized. They build autonomous agents on top of messy data. The agent makes confident wrong decisions. Socure avoided this by spending two years on the O and A layers before touching R. Colin’s pricing agent (Reinvent) works because it sits on top of clean, validated, orchestrated data.
Where AI Selling Fits in the O-A-R Model
Evan was direct on the podcast: “AI is coming for every job.” But at $600-700K deal sizes with bespoke solutions and multi-stakeholder buying committees, the human relationship is still the moat. The nuance is in the segmentation:
Self-service / PLG (Reinvent now): Prepackaged solutions, predictable pricing. Colin called it “widget selling.” AI can handle the full buying process.
Mid-market (Amplify now, Reinvent later): AI handles research, prep, follow-up, proposals. Human handles the relationship and negotiation.
Enterprise (Optimize now): AI captures and routes deal intelligence. Humans own the 18-month relationship. “Not sure where that is on the horizon, but it’ll probably be here quicker than we know.”
03 | The 5-Layer Stack Architecture
Build these layers in sequence. Skip one and everything above it breaks.
LAYER 5: AGENTIC WORKFLOWS. Autonomous agents (pricing, quoting, onboarding) that execute with guardrails. Requires validated data, governance, and clean signal routing underneath.
LAYER 4: AI GOVERNANCE. Architecture councils, 4-gate evaluation, crawl-walk-run validation. Prevents tool sprawl. Catches redundant builds. Maintains data trust.
LAYER 3: SIGNAL ROUTING. AI detects product feedback, churn risk, expansion signals. Routes to the right team. Turns raw data into team-specific actions. Proactive, not reactive.
LAYER 2: CROSS-TEAM VISIBILITY. Account channels, pre-meeting briefs, self-service context for all stakeholders. No sync meetings. No “can you loop me in?” Deal context is ambient.
LAYER 1: DATA FLOW & CAPTURE. Call recording, transcript ingestion, CRM auto-population, structured output. The foundation. Everything above depends on clean, automated data flow.
THE GLEAN LESSON: WHY SEQUENCE MATTERS. Colin plugged Glean directly into Salesforce. It was not as helpful as he wanted. Custom fields, custom objects, internal naming conventions. Raw CRM data is a mess for AI to parse. The fix: point Glean at orchestrated data from Momentum instead. Clean, structured summaries. Reliable search. Layer 2 (visibility via Glean) only worked after Layer 1 (data flow via Momentum) was solid.
04 | Data Flow & Capture
The Socure Wiring
Clari Copilot records every call. Momentum ingests transcripts, extracts structured data, and pushes summaries to account-specific Slack channels. A custom Granola note-taker ingestion agent (built with Zapier) captures additional meeting context and attaches it to CRM accounts and opportunities. Every data source converges.
Implementation Sequence
Weeks 1-2: Audit call recording coverage. What % of customer calls are recorded? Success metric: baseline recording rate established.
Weeks 3-4: Connect recorder to orchestration tool. Configure CRM field mapping. Success metric: first auto-populated CRM record.
Weeks 5-6: Pilot with one deal team. Human-in-the-loop validation of every AI output. Success metric: >85% accuracy vs. manual entry.
Weeks 7-8: Refine mappings from pilot feedback. Expand to full sales org. Success metric: >90% auto-population rate, team adoption.
The Noise vs. Signal Framework
Evan made a critical point on the podcast: “With AI, you have the potential to just add so much noise to the system.” Socure has been intentional about not adding fields just because AI can populate them. Their test:
Add it if it drives a decision: forecasting, deal routing, prioritization
Skip it if it’s interesting but nobody will act on it
Validate first by running AI output alongside human judgment for 30 days
Keep human entry + AI cross-check for judgment-dependent fields (loss/churn reason)
05 | Cross-Team Visibility
Make deal context self-service. Kill the sync meeting forever.
Socure runs account-specific Slack channels with 47+ members. A morning technical call and an afternoon executive call both land in the same channel automatically, with full context. The SC self-serves. The CEO reads it on his own time. The CSM knows what was promised in the sales cycle. Nobody asks “can you loop me in?”
The Pre-Meeting Brief Agent (Glean)
Colin built a Glean agent that generates a one-click pre-meeting brief from 6 months of orchestrated data. He demoed the USAA account brief on the podcast. Seven sections:
Executive relationship status and sentiment
Current account status: deals in flight, risk flags
Recent major wins and product expansions
Risks and dependencies: product issues, competitive threats, contract concerns
Stakeholder map: internal team + external contacts with roles
Recommended talking points and meeting goals
Appendix: referenced Slack threads, opportunities, documents
THE CONSOLIDATION INSIGHT: Four separate teams (marketing, product, sales, executive) were requesting variations of the same brief. Colin triangulated with Socure’s internal AI team and realized these were “actually kind of the same thing.” One agent now serves all four. He built it once and eliminated four redundant build requests. Always ask: “Is another team requesting something 90% similar?”
06 | Signal Routing & Action
Turn raw call data into team-specific actions that arrive automatically.
Evan demoed Socure’s product feedback channel on the podcast. Momentum listens for keywords related to actionable product feedback across all calls. Product leaders monitor the channel daily. Before this system: product issues took 3+ weeks to cascade from a customer call to the product team. Now: same day.
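The routing idea can be sketched as a simple keyword-to-channel map: scan each call line for team-specific signals and deliver matches to the right channel. The real system (Momentum) is far more sophisticated than keyword matching; the channel names and keyword lists below are assumptions for illustration only.

```python
# Illustrative keyword-based signal router. Channels and keywords are
# invented; the production system uses AI detection, not string matching.

ROUTES = {
    "#product-feedback": ["feature request", "doesn't support", "product gap"],
    "#churn-risk":       ["cancel", "not renewing", "disappointed"],
    "#expansion":        ["additional seats", "new use case", "other team"],
}

def route_signals(call_lines):
    """Return {channel: [matching lines]} for every configured signal type."""
    hits = {}
    for line in call_lines:
        lowered = line.lower()
        for channel, keywords in ROUTES.items():
            if any(k in lowered for k in keywords):
                hits.setdefault(channel, []).append(line)
    return hits

lines = ["They raised a feature request for bulk export.",
         "Marketing mentioned a new use case for onboarding."]
print(route_signals(lines))
```

Even this naive version captures the structural win: the signal reaches the owning team the same day, without a human relaying it.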
The AI + Human Cross-Check Pattern
Socure still mandates that reps enter loss and churn reasons manually. But now Momentum cross-references that answer against what actually happened in call transcripts. When there’s a discrepancy, it gets flagged.
Rep enters: “Budget constraints”
AI analyzes last 5 calls: competitor pricing discussed 3x, product gap mentioned 2x
System flags discrepancy for manager review
Actual reason: competitive displacement, not budget
This gives you rep accountability plus AI validation. Your churn and loss analysis becomes dramatically more reliable. The dropdown problem (“picked whatever closes the screen”) is solved without removing human judgment.
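The cross-check pattern above reduces to a small comparison: tally transcript evidence per candidate reason, and flag when the evidence points somewhere other than the rep's entry. Topic extraction is mocked here as a plain list, and the reason taxonomy and keyword sets are illustrative, not Socure's actual configuration.

```python
# Sketch of the loss-reason cross-check. Reasons and keyword sets are
# invented for illustration; transcript topic extraction is mocked.

REASON_TOPICS = {
    "Budget constraints": {"budget", "pricing freeze"},
    "Competitive displacement": {"competitor pricing", "competitor demo"},
    "Product gap": {"missing feature", "product gap"},
}

def cross_check(rep_reason, extracted_topics):
    """Compare the rep's stated reason against transcript evidence and
    flag a discrepancy for manager review."""
    counts = {reason: sum(t in kws for t in extracted_topics)
              for reason, kws in REASON_TOPICS.items()}
    likely = max(counts, key=counts.get)
    return {
        "rep_reason": rep_reason,
        "evidence_counts": counts,
        "likely_reason": likely,
        "flag_for_review": likely != rep_reason and counts[likely] > 0,
    }

# The scenario from the text: competitor pricing discussed 3x, product gap 2x
topics = ["competitor pricing"] * 3 + ["product gap"] * 2
result = cross_check("Budget constraints", topics)
print(result["likely_reason"])  # Competitive displacement
```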
WHAT’S NEXT: AGGREGATE SIGNAL ANALYSIS. Socure just saw a demo of Momentum’s ExecBriefs 2, which does aggregate analysis across all signals over a week. Pattern detection at scale: “Across all calls this week, product X was mentioned 14 times with negative sentiment. 8 of those were from accounts in renewal.” That’s the Amplify layer turning into Reinvent.
07 | AI Governance & Stack Discipline
Two councils. Four gates. Zero tool sprawl.
Socure spent two years consolidating their tech stack. They killed redundant vendors, evaluated overlapping functionality, and standardized on Clari's full suite for email logging, call recording, and forecasting. The discipline is structural, not aspirational.
The Two-Council Model
Enterprise Architecture Council
Scope: all new tool purchases and integrations. Gatekeeping question: “Do we already have something that does this?”
AI Architectural Council
Scope: all new AI agent builds and LLM use cases. Gatekeeping question: “Is another team building something 90% similar?”
The 4-Gate Evaluation Framework
Before approving any new AI capability:
Gate 1: Are we already collecting this data? 80% of the time: yes. The data exists in the wrong system, entered manually, or delayed. Fix the pipe before buying a new bucket.
Gate 2: Is there a better capture method than what we do today? If manual, automate. If siloed, integrate. Don’t create a parallel system.
Gate 3: Can we validate AI output with human-in-the-loop? Every new AI field or agent goes through pilot with human review. Sense-check for 30 days.
Gate 4: Does this enrich existing systems or create a new silo? Output should feed back into CRM and orchestration. Standalone tools nobody checks = wasted budget.
The Crawl-Walk-Run Validation Process
Crawl (2-4 weeks): Human reviews every AI output before production. Exit criteria: >85% accuracy, zero critical errors.
Walk (4-8 weeks): Human spot-checks 20%. Discrepancies reviewed. Exit criteria: >92% accuracy, stable process.
Run (ongoing): Automated monitoring. Human on exceptions only. Continuous accuracy monitoring.
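The crawl-walk-run exit criteria can be expressed as a simple promotion check run after each review cycle. The thresholds mirror the stages above; the function and its interface are an illustrative sketch, not a described implementation.

```python
# Sketch of crawl-walk-run promotion logic. Thresholds come from the
# validation stages above; everything else is illustrative.

STAGES = [
    ("crawl", 0.85),  # human reviews every output; exit at >85% accuracy
    ("walk",  0.92),  # human spot-checks 20%; exit at >92% accuracy
    ("run",   None),  # automated monitoring, human on exceptions only
]

def next_stage(current, accuracy, critical_errors=0):
    """Return the stage this workflow should run at after a review cycle."""
    names = [name for name, _ in STAGES]
    threshold = dict(STAGES)[current]
    if threshold is None:                       # already at run: keep monitoring
        return "run"
    if critical_errors == 0 and accuracy > threshold:
        return names[names.index(current) + 1]  # promote to the next stage
    return current                              # keep validating at this stage

print(next_stage("crawl", accuracy=0.88))  # walk
print(next_stage("walk", accuracy=0.90))   # walk
```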
08 | Scale & Self-Service
Build AI infrastructure that grows with the business, not the headcount.
Socure scaled from 4 GTM verticals to 10. They’re hiring “a lot of new reps.” Evan’s question: “How do we scale with all those reps by not necessarily hiring a bunch of new headcount on the RevOps side?” The answer is self-service AI and documented process.
THE PRICING AGENT: SOCURE’S NEXT BUILD. Colin is most excited about the pricing and packaging agent Evan is building. “That is a lot of questions and a lot of hand-holding with all the new people coming on, that we could probably cut down 70% of if we have that.” The agent follows pricing guardrails, catches non-standard deal structures before they reach deal desk, and guides reps through what a standard deal looks like. Reinvent-stage thinking built on Optimize-stage data.
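The guardrail idea is worth sketching: validate a draft quote against standard-deal rules before it ever reaches deal desk, so errors are caught at submission rather than five rejections later. Socure's actual agent is not public; the rules, thresholds, and field names below are invented purely to show the pattern.

```python
# Hedged sketch of a quoting guardrail. All rules and fields are
# illustrative assumptions, not Socure's pricing policy.

GUARDRAILS = {
    "max_discount_pct": 20,
    "allowed_terms_months": {12, 24, 36},
    "min_deal_value": 50_000,
}

def check_quote(quote):
    """Return a list of guardrail violations; an empty list means the
    quote matches a standard deal structure and is ready to submit."""
    errors = []
    if quote["discount_pct"] > GUARDRAILS["max_discount_pct"]:
        errors.append(f"Discount {quote['discount_pct']}% exceeds the "
                      f"{GUARDRAILS['max_discount_pct']}% cap: needs approval")
    if quote["term_months"] not in GUARDRAILS["allowed_terms_months"]:
        errors.append(f"Non-standard term: {quote['term_months']} months")
    if quote["value"] < GUARDRAILS["min_deal_value"]:
        errors.append("Below minimum deal value for this package")
    return errors

draft = {"discount_pct": 25, "term_months": 18, "value": 400_000}
for issue in check_quote(draft):
    print(issue)
```

For a new rep, the feedback loop collapses from days of deal-desk rejections to an instant checklist at draft time, which is where the estimated 70% reduction in back-and-forth comes from.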
The Documentation & Evangelism Imperative
Colin was explicit: “We both may not be here or be available at some point, but want people to be able to know what we have, what it does, and how to use it.” Two priorities:
Documentation: What each tool does, why it exists, how to use it, what to do when it breaks. If the builders leave, can someone else maintain it?
Internal evangelism: Proactive enablement. Video walkthroughs. Adoption tracking. The best AI stack is worthless if nobody uses it. Evan noted: “Once you start documenting, you start seeing where things took much longer than they should have.”
THE SCALING TEST: If you added 10 reps next quarter, how many additional RevOps hours would that require? If the answer is more than 5% of current capacity, your self-service layer needs work. Target: near-zero marginal ops cost per new rep.
09 | Implementation Templates
Threshold: Average score 3.5+ to proceed. Any single criterion scoring 1 = automatic rejection.
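The scoring rule reduces to two checks, sketched below: any criterion scored 1 is an automatic rejection, otherwise proceed only if the average is 3.5 or higher. The criterion names are illustrative placeholders.

```python
# Sketch of the council scoring threshold. Criterion names are invented;
# the 3.5 average and automatic-rejection-on-1 rules are from the text.

def evaluate(scores):
    """scores: criterion -> 1..5 rating from the council review."""
    if min(scores.values()) == 1:
        return "reject"                      # automatic rejection
    avg = sum(scores.values()) / len(scores)
    return "proceed" if avg >= 3.5 else "revisit"

print(evaluate({"data_exists": 4, "capture_method": 4,
                "validatable": 3, "enriches_systems": 4}))   # proceed (avg 3.75)
print(evaluate({"data_exists": 5, "capture_method": 1,
                "validatable": 5, "enriches_systems": 5}))   # reject
```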
Template 2: Call Summary Prompt
Starting point for your orchestration tool’s call summary output:
QUICK RECAP (2-3 sentences): What happened? Key outcome.
ATTENDEES: All participants with role/title.
KEY DISCUSSION TOPICS: Top 3-5 topics, one sentence each.
RISKS IDENTIFIED: Concerns, objections, competitive mentions, timeline risks.
NEXT STEPS: Action items with owners and deadlines.
MEDDPICC UPDATE: New information per element. “No update” for unchanged.
PRODUCT SIGNALS: Feature requests, product concerns, new use cases. Flag if actionable.
SUGGESTED FOLLOW-UP EMAIL: Draft to primary contact summarizing next steps.
10 | The Orchestration Cheat Sheet
One page. Pin it. Execute weekly.
THE 5-LAYER SEQUENCE
Data Flow: Calls recorded > Transcripts ingested > CRM auto-populated > Slack delivered
Visibility: Account channels > Pre-meeting briefs > Self-service context for all stakeholders
Signal Routing: Product feedback > Churn risk > Expansion signals > Competitive intel (auto-routed)
Governance: Enterprise Arch Council + AI Arch Council > 4-gate eval > Crawl-walk-run validation
Scale: Self-service agents > Documentation > Evangelism > Near-zero marginal ops cost per rep
THE O-A-R CHECK
Optimize: Automate what you already do manually (CRM fields, call notes, follow-ups)
Amplify: Do things humans can’t at scale (one-click briefs from 6mo of data, aggregate signal analysis)
Reinvent: Fundamentally change the workflow (autonomous pricing agents, self-service buying)
Know your stage. Sequence accordingly. Don’t skip layers.
THE 4 EVALUATION GATES
Are we already collecting this data? (80% of the time: yes)
Is there a better capture method than manual?
Can we validate AI output with human-in-the-loop?
Does output enrich existing systems or create a new silo?
THE NOISE VS. SIGNAL TEST
Add it if it drives a decision (forecasting, routing, prioritization)
Skip it if it’s interesting but not actionable
Validate first: AI output vs. human judgment for 30 days
Keep human entry + AI cross-check for judgment-dependent fields
THE CONSOLIDATION CHECK
Before building ANY agent: “Is another team requesting something 90% similar?”
Before buying ANY tool: “Do we already have something that does this?”
Before adding ANY field: “Will this drive a decision, or just add noise?”
THE SCALING TEST
If you added 10 reps tomorrow, how many additional RevOps hours does that require?
Target: <5% increase in ops capacity per rep added
Track: top 10 most-asked RevOps questions > automate top 5
WHERE AI SELLS FIRST
Self-service / PLG: AI handles the full buying process (Reinvent now)
Mid-market: AI handles prep + follow-up, humans handle relationships (Amplify now)
Enterprise: AI captures + routes intel, humans own the relationship (Optimize now)