3/12/26: Your Enablement Playbook Is Your AI Moat: The Complete Cowork GTM Playbook
Once again, I'm so excited to share this next podcast with Victor Adefuye, a returning guest, who shows off amazing Claude Cowork workflows that you will geek out on.
Let’s get into it!
You can go to YouTube, Apple, or Spotify, as well as a whole host of other locations, to hear the podcast or see the video interview.
I watched Victor Adefuye research 15 companies, score 7 sales calls, and build a Make.com automation. All running simultaneously. From three-sentence prompts.
Victor spent 5 years as an MD at Winning by Design. He’s been teaching reps how to write emails, run discovery, and execute MEDDPICC for over a decade. That matters, because the punchline here isn’t “AI is powerful.” It’s that the people who already know what good looks like are the ones building the most dangerous agents.
On this week’s GTM AI Academy episode, Victor walked through three live workflows in Claude Cowork that should make every GTM leader rethink how their team operates. Here’s what actually matters:
1) Lead research at scale just became a one-person operation.
Victor took 360 conference leads, pointed Cowork at the folder, and typed three sentences. Cowork spun up 15 parallel sub-agents. Each one researched a company independently, scored it against his ICP, and came back with a fit assessment and outreach angle. Total hands-on time: about 90 seconds of typing.
But here’s the real leverage. Victor connected a second project folder containing his 110-page content library (tagged by pain point), his case studies, and his email voice filter. The agent matched each prospect’s pain points to the right content assets and wrote personalized nurture sequences. Not “Hi {first_name}” personalization. Actual strategic personalization that connected their business challenges to specific resources.
The workflow:
Point Cowork at a lead list folder
Build a “Research Prospect” skill with your ICP criteria and scoring rubric
Connect a project folder with your content library, case studies, and email examples
Type your prompt. Let parallel sub-agents do the rest.
What took Victor’s team hours now runs while he grabs coffee.
2) MEDDPICC scoring at scale exposes coaching gaps you can’t see one call at a time.
Victor built a custom MEDDPICC extraction skill with a calibrated 0-2 scoring system. Zero means the element never came up. One means it was mentioned but shallow. Two means it was thoroughly covered with clear next steps.
He fed 7 call transcripts into the folder. Cowork launched 7 parallel sub-agents, each analyzing one call. Individual scorecards came back with direct quotes and timestamps so you can verify every score. Then a synthesis agent rolled up the results into a team-wide gap report.
The finding from his sample data: reps were weakest on engaging the economic buyer, developing champions, and clarifying the paper process. That’s not a guess. That’s scored evidence across multiple calls.
Why this changes coaching: Instead of training the whole team on all 8 elements of MEDDPICC, managers can now see that Jonathan needs work on Economic Buyer while Victor needs help quantifying metrics. Personalized coaching paths based on actual call data, not manager intuition.
The key design insight: Victor learned early that AI scoring is unreliable without detailed examples for each score level. A “rate this 0-2” prompt gives inconsistent results. A prompt that says “a zero looks like this specific exchange, a one looks like this, a two looks like this” with real examples for every element produces scores you can trust.
3) Non-technical GTM leaders can now build production automations through conversation.
Victor is not a developer. He’s been “automation curious” for years but intimidated by the technical barrier. In the last 2 weeks, he built 7 Make.com scenarios through Claude Cowork.
The one that stopped me: He gave Cowork a folder of automation ideas he brainstormed over Christmas. Just rough notes. Cowork reviewed them, told him which ones it could build with minimal help, and started. It opened Make.com in his browser, selected modules, configured connections, and wrote the prompts for each step.
One automation takes his Fathom call transcripts, classifies them (client call vs. sales call vs. other), writes a recap with action items, drafts a follow-up email in his voice, and saves it as a Gmail draft. After 7 back-to-back calls, he opens his drafts folder and every follow-up is waiting.
Another runs weekly Perplexity searches for recently posted CRO, GTM Engineer, and Chief AI Officer roles. It finds who the new hire reports to, then builds a drip campaign using those hiring triggers.
The pattern: Know what you want to automate. Describe the workflow in plain language. Let the agent handle the technical execution.
The tactical shift:
Victor said something that stuck: “The same training I used to give to humans, and hope they would remember, I now give to AI. It’s more reliable.”
That’s the unlock. If you’ve spent years defining what good looks like in your org (good emails, good discovery calls, good qualification), you’re sitting on the raw material for the most powerful AI agents in your space.
What to do this week:
Pick one repeatable workflow your team does manually (lead research, call scoring, follow-up emails)
Document what “good” looks like for that workflow. Be specific: examples, scoring criteria, templates.
Package it as a Cowork skill and test it on 5-10 real inputs
Compare the output to your best human performer
The gap between “AI curious” and “AI dangerous” is one skill file and the willingness to test it.
3. LEAD MAGNET: The Cowork GTM Playbook
How to Build the 3 Highest-ROI GTM Workflows in Claude Cowork (Step-by-Step)
Inspired by Victor Adefuye’s live demos on the GTM AI Academy Podcast. Written by Coach K.
Most people open Cowork, type a prompt, and get a mediocre result. Then they say “AI isn’t ready yet” and go back to doing everything manually.
The problem isn’t the tool. It’s that they skipped the part that actually makes agents useful: teaching the agent what good looks like before asking it to perform.
Victor Adefuye built three workflows that replaced hours of manual work. Lead research that scores and prioritizes 360 prospects. MEDDPICC analysis across 7 sales calls simultaneously. Make.com automations built through conversation. None of this required code. All of it required one thing: packaging your expertise into skills first.
This playbook breaks down exactly how to build each one.
WORKFLOW 1: Automated Lead Research & Personalized Outreach
What it does: Takes a raw lead list, researches each company in parallel, scores against your ICP, prioritizes the top prospects, and writes personalized nurture emails that match prospect pain points to your content assets.
Time comparison:
Manual: 3-5 hours for 15 leads (research + email drafting)
With Cowork: ~15 minutes of setup, runs autonomously
What you need before you start:
Before you touch Cowork, prepare these assets:
A. Your ICP Scoring Document
Write out exactly what makes a lead a good fit for your business. Be ruthlessly specific.
Include:
Company size range (e.g., 50-500 employees)
Industry vertical (e.g., B2B SaaS with outbound sales motion)
Revenue range
Technology signals (e.g., uses Salesforce, has a RevOps team)
Disqualifiers (e.g., pre-revenue startups, enterprise 10K+ employees)
Scoring rubric: What’s a 9/10 fit vs. a 5/10 vs. a 2/10? Give examples of each.
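To make the rubric concrete, here is a minimal sketch of how a fit score might be computed. All criteria, weights, and thresholds below are illustrative placeholders, not Victor's actual rubric:

```python
# Illustrative ICP fit scorer. Criteria and weights are hypothetical
# examples, not Victor's actual rubric.

def score_lead(lead: dict) -> int:
    """Return a 1-10 fit score for a lead, or 1 if disqualified."""
    # Hard disqualifiers first: no other signal rescues these.
    if lead["employees"] < 10 or lead["employees"] > 10_000:
        return 1
    if lead.get("revenue", 0) == 0:  # pre-revenue startups are out
        return 1

    score = 5  # neutral baseline
    if 50 <= lead["employees"] <= 500:
        score += 2  # sweet-spot company size
    if lead.get("industry") == "B2B SaaS":
        score += 1
    if "Salesforce" in lead.get("tech_stack", []):
        score += 1  # technology signal
    if lead.get("has_revops_team"):
        score += 1
    return min(score, 10)

lead = {
    "company": "Acme Analytics",
    "employees": 220,
    "revenue": 12_000_000,
    "industry": "B2B SaaS",
    "tech_stack": ["Salesforce", "Outreach"],
    "has_revops_team": True,
}
print(score_lead(lead))  # → 10
```

The point isn't the code; it's that your written rubric should be this unambiguous. If a human could compute the score from your document, so can the agent.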
B. Your Content Library
Create a document (Google Doc, markdown file, whatever) with:
Every piece of content you’d share with a prospect (articles, podcasts, case studies, white papers, tools)
Tag each one by topic and pain point it addresses
Include a one-line summary of what the prospect gets from it
Victor’s content library is 110 pages. Yours doesn’t need to be. Start with 10-20 of your best assets tagged by pain point.
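One way to picture the tagged library: a flat list of assets, each tagged by the pain points it addresses, with a simple lookup by pain point. The entries below are made up for illustration:

```python
# Illustrative tagged content library. Titles, tags, and summaries
# are hypothetical, not Victor's actual assets.
CONTENT_LIBRARY = [
    {"title": "Case Study: 40% More Meetings Booked",
     "pain_points": ["low outbound reply rates"],
     "summary": "How one team rebuilt its outbound sequences."},
    {"title": "Podcast: Fixing Discovery Calls",
     "pain_points": ["weak discovery", "deals stalling"],
     "summary": "A framework for deeper discovery questions."},
    {"title": "White Paper: MEDDPICC for Mid-Market",
     "pain_points": ["weak discovery", "deals slipping quarters"],
     "summary": "Qualification rigor without enterprise overhead."},
]

def assets_for(pain_point: str) -> list[str]:
    """Return the titles of every asset tagged with a pain point."""
    return [a["title"] for a in CONTENT_LIBRARY
            if pain_point in a["pain_points"]]

print(assets_for("weak discovery"))
# → ['Podcast: Fixing Discovery Calls', 'White Paper: MEDDPICC for Mid-Market']
```

That's the whole trick behind "strategic personalization": the agent is doing exactly this lookup, matching each prospect's researched pain points to tagged assets.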
C. Your Email Voice File
Write 3-5 examples of emails you’ve actually sent that you thought were great. Include:
The structure you prefer (how you open, how long paragraphs are, how you close)
Phrases you actually use vs. phrases that sound nothing like you
What you never want the agent to write (Victor specifically trained his to avoid generic pitches)
D. Your Lead List
A spreadsheet with company name, contact name, title, and any notes. Put it in a folder Cowork can access.
Building the Skill:
Create a skill called “Research Prospect” (or whatever matches your workflow). Inside, include:
SKILL.md contents:
## Purpose
Research prospects from a lead list, score them against my ICP, and identify the best outreach angle for each.
## Research Process
For each lead, find:
- Company overview (what they do, size, funding stage, growth signals)
- Sales motion (inbound vs outbound, team size, tech stack)
- Key stakeholders (who makes buying decisions for [your category])
- Recent news or triggers (new hires, funding rounds, product launches)
- Competitive landscape (who else sells to them)
## Scoring Rubric
[Paste your ICP scoring document here]
## Output Format
For each lead, produce:
- Fit score (1-10) with rationale
- Top outreach angle (the specific pain point or trigger to reference)
- Recommended content assets to share (matched from content library)
## Prioritization
Rank all leads by fit score. Flag the top [X] for immediate outreach.

The Prompt (yes, it’s this short):
“Attached is a folder with my lead list from [event/source]. Prioritize my outreach to the top [X] leads. Write personalized emails to the top 3 using the instructions and knowledge base from [your nurture project folder].”
What happens next:
Cowork reads the spreadsheet
It asks clarifying questions (which lead category to focus on, any filters)
It launches parallel sub-agents to research each company
It scores and ranks them
It matches pain points to your content library
It writes personalized emails and runs them through your voice filter
It self-checks quality before presenting the output
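The parallel fan-out in step 3 has the same shape as a thread pool: one task per lead, all in flight at once, results collected at the end. A conceptual sketch, where the `research` stub stands in for what a real sub-agent does (browsing, reading, scoring):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a research sub-agent. In Cowork this is an actual
# agent researching a company, not a local function.
def research(company: str) -> dict:
    return {"company": company, "fit_score": len(company) % 10 + 1}

leads = [f"Company {i}" for i in range(1, 16)]  # 15 leads, as in the demo

# One worker per lead: all 15 "sub-agents" run at the same time,
# so total wall-clock time is roughly one research task, not fifteen.
with ThreadPoolExecutor(max_workers=len(leads)) as pool:
    results = list(pool.map(research, leads))

# Rank by fit score, best first, ready for outreach prioritization.
ranked = sorted(results, key=lambda r: r["fit_score"], reverse=True)
print(len(ranked))  # → 15
```

That wall-clock math is why 15 companies take about as long as one: the work is wide, not deep.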
Pro tips:
Tell Cowork to “run the research in parallel” if it starts doing them one at a time
Connect a second project folder with your email templates and content library so the agent has everything in one prompt
After the first run, review the scoring. If a 7/10 lead should have been a 4, refine your rubric with more examples.
WORKFLOW 2: MEDDPICC Call Analysis at Scale
What it does: Analyzes multiple sales call transcripts simultaneously, scores each element of MEDDPICC on a calibrated scale, produces individual scorecards with evidence, and synthesizes a team-wide gap analysis.
Time comparison:
Manual review: 45-60 minutes per call
With Cowork: 7 calls analyzed simultaneously in ~10 minutes
What you need before you start:
A. Your MEDDPICC Scoring Rubric (this is the whole game)
For EACH element of MEDDPICC (Metrics, Economic Buyer, Decision Criteria, Decision Process, Identify Pain, Champion, Competition, Paper Process), define:
Score 0: Element never came up. No mention, no discussion, no attempt.
Example: “The rep never asked who would sign the contract or what the approval process looks like.”
Score 0.5: Brief mention, no depth.
Example: “The prospect mentioned their VP would need to approve, but the rep didn’t ask any follow-up questions about the VP’s priorities or timeline.”
Score 1: Discussed but surface-level. Some information gathered, clear gaps remain.
Example: “The rep asked about decision criteria and learned they care about integration speed, but didn’t quantify what ‘fast’ means or ask about competing priorities.”
Score 1.5: Solid discussion with some tactical depth. Most information gathered.
Example: “The rep identified the economic buyer by name, learned their top 3 priorities, but hasn’t met them directly or confirmed budget authority.”
Score 2: Thoroughly covered. Clear evidence, specific details, and next steps tied to this element.
Example: “The rep met the economic buyer, confirmed budget of $X allocated for Q2, aligned the proposal to their stated priority of reducing churn by 15%, and has a follow-up meeting scheduled to review the business case.”
This calibration is what separates reliable scoring from AI guesswork. Victor learned this through years of iteration. Without these examples, the model gives different scores for the same transcript on different runs. With them, it’s consistent.
B. Your Call Transcripts
Put them in a folder. VTT, TXT, or any text format works. Label them clearly (company name or rep name).
Building the Skill:
SKILL.md contents:
## Purpose
Analyze sales call transcripts using MEDDPICC framework with calibrated scoring.
## Scoring System
[Paste your full 0-2 scoring rubric with examples for EVERY element]
## Analysis Requirements
For each call, produce:
1. MEDDPICC Scorecard (score for each element)
2. Direct quotes from the transcript supporting each score (with timestamps)
3. Gap analysis: What should the rep have asked or done differently?
4. Suggested follow-up questions for the next call
5. Overall deal health assessment
## Synthesis Report (after all individual analyses)
- Average score per MEDDPICC element across all calls
- Biggest team-wide gaps (lowest-scoring elements)
- Rep-by-rep comparison
- Recommended coaching focus areas for the next quarter
- Chart or table visualization of scores
## Quality Check
Always cite specific quotes. Never assign a score without evidence from the transcript. If an element wasn't discussed, score it 0 and note it as a gap.

The Prompt:
“Analyze the calls in this folder using your MEDDPICC extraction skill. Run the analysis in parallel.”
What to do with the output:
Use the team-wide gap report to set quarterly coaching priorities
Use individual scorecards in 1:1s to give reps specific, evidence-based feedback
Track scores over time to measure coaching impact
Share the synthesis with leadership as a pipeline health diagnostic
The calibration loop: After the first batch, spot-check 2-3 calls you know well. If scores don’t match your assessment, add more examples to your rubric at the boundary where it’s getting confused (usually the 1 vs. 1.5 range). Two iterations usually get it to 90%+ agreement with human scoring.
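The team-wide roll-up in the synthesis report is simple arithmetic over the per-call scorecards: average each element across calls, then surface the lowest averages as coaching priorities. A minimal sketch, with scores invented for illustration:

```python
from statistics import mean

# Hypothetical per-call MEDDPICC scorecards on the calibrated 0-2 scale
# (a subset of elements, invented scores).
scorecards = [
    {"Metrics": 1.5, "Economic Buyer": 0.5, "Champion": 1.0, "Paper Process": 0.0},
    {"Metrics": 2.0, "Economic Buyer": 0.0, "Champion": 0.5, "Paper Process": 0.5},
    {"Metrics": 1.0, "Economic Buyer": 1.0, "Champion": 0.5, "Paper Process": 0.0},
]

# Average each element across all calls.
elements = scorecards[0].keys()
averages = {e: round(mean(c[e] for c in scorecards), 2) for e in elements}

# The lowest-scoring elements become the coaching priorities.
gaps = sorted(averages, key=averages.get)[:3]
print(averages)  # → {'Metrics': 1.5, 'Economic Buyer': 0.5, 'Champion': 0.67, 'Paper Process': 0.17}
print(gaps)      # → ['Paper Process', 'Economic Buyer', 'Champion']
```

The hard part is never the averaging; it's the calibrated scoring that feeds it. Garbage scores in, confident-looking garbage report out.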
WORKFLOW 3: Building Automations Through Conversation
What it does: You describe what you want automated in plain language. Cowork opens Make.com (or Zapier) in your browser and builds it for you. Selects modules, configures connections, writes prompts, and sets up the logic.
Time comparison:
Learning Make.com + building: 5-20 hours per automation
With Cowork: 1-3 hours per automation, zero prior Make.com knowledge required
What you need before you start:
A. Your Automation Brainstorm Document
Write out each automation you want. Be specific about:
The trigger (what kicks it off)
The inputs (where does the data come from)
The logic (any classification, filtering, or decisions)
The output (what happens at the end)
The tools involved (Gmail, Google Docs, Slack, CRM, etc.)
Example from Victor:
“When a call recording finishes in Fathom, take the transcript, save it to Google Docs, classify the call as client/sales/other, send it to Claude for a recap with action items, draft a follow-up email in my voice, and save it as a Gmail draft.”
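Inside that automation, the classification step is a Claude prompt in a Make.com module. A keyword-based stand-in shows the shape of the logic; the three categories match Victor's, but the keywords are invented:

```python
# Keyword stand-in for the call-classification step. In the real
# automation this decision is made by a Claude prompt, not keywords.
SALES_SIGNALS = {"pricing", "proposal", "demo", "budget"}
CLIENT_SIGNALS = {"onboarding", "renewal", "check-in", "support"}

def classify_call(transcript: str) -> str:
    """Classify a transcript as 'sales', 'client', or 'other'."""
    words = set(transcript.lower().split())
    if words & SALES_SIGNALS:
        return "sales"
    if words & CLIENT_SIGNALS:
        return "client"
    return "other"

print(classify_call("Walked them through pricing and scheduled a demo"))
# → sales
```

The classification then routes the transcript: sales calls get the follow-up-email treatment, client calls get the recap-with-action-items treatment, and everything else just gets filed.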
B. Your Make.com/Zapier Account
Have it set up and logged in. Connect the apps you’ll need (Gmail, Google Drive, Slack, your CRM). Cowork will ask you to handle API keys and logins. It refuses to touch credentials (which is smart).
C. The Make.com Connector Enabled in Cowork
Make sure your automation platform’s connector is turned on in Cowork settings.
The Process:
Give Cowork access to your brainstorm folder
Ask: “Review the automations in this folder. Tell me which ones you can help me build with minimal help from me.”
It’ll assess feasibility and complexity for each one
Pick the one with highest ROI and say “Let’s build it”
Cowork opens Make.com in your browser and starts configuring
It will stop and ask you for help when it needs credentials, API keys, or account-specific decisions
Test the automation with real data
Iterate on the prompts inside the automation until the output quality is right
Victor’s automation portfolio (built in 2 weeks):
Call transcript processor: Fathom → Google Docs → classification → Claude recap → Gmail draft
CRO hiring trigger: Weekly Perplexity search → identify new CRO postings → find reporting structure → build drip campaign
GTM Engineer trigger: Same pattern for GTM Engineer roles
Chief AI Officer trigger: Same pattern for CAIO roles
Pro tips:
Start with the automation that saves you the most daily time, not the most impressive one
When Cowork builds a prompt for a module inside the automation, review it. Add your voice examples and quality standards.
If it gets stuck, describe the step you want in more detail. It’s remarkably good at figuring out which Make.com module to use when you describe the function.
Never paste API keys into the chat. Cowork will tell you where to insert them manually.
THE META-FRAMEWORK: Why Skills Are the Moat
Everything Victor demonstrated traces back to one principle: he packaged his expertise before asking AI to perform.
The lead research works because he spent time defining his ICP scoring rubric. The call analysis works because he calibrated the MEDDPICC scoring with detailed examples at every level. The personalized emails work because he built a tagged content library and a voice filter.
This is the uncomfortable truth about AI agents: they’re exactly as good as the instructions you give them. And the people who have spent years defining what “good” looks like in GTM (enablement leaders, sales trainers, RevOps architects) are sitting on the most valuable raw material.
The skill file is the new competitive moat. Not the AI platform. Not the prompt. The packaged expertise that took years to develop and minutes to deploy.
Your action plan:
This week: Pick one repeatable workflow. Document what “good” looks like for it. Be obsessively specific.
Next week: Turn that documentation into a Cowork skill. Test it on 5-10 real inputs.
Week 3: Refine based on output quality. Add more examples where the scoring or output is inconsistent.
Week 4: Stack a second skill. Connect them. Watch the compounding begin.
The people building skills right now will have a 6-12 month head start on everyone who’s still typing one-off prompts into a chat window.