<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[GTM AI Podcast & Newsletter: AI Education Playbook for Professionals]]></title><description><![CDATA[The Playbook is a practical how-to library for business professionals who want to get real work done with AI tools.  

Not theory, not hype, not "here's what AI can do." 

Step-by-step workflows, copy-paste prompts, and specific instructions for the tools you already use or are deciding between.]]></description><link>https://www.gtmaipodcast.com/s/ai-business-network</link><image><url>https://substackcdn.com/image/fetch/$s_!ceUl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6851cfbb-0ee0-4c7a-a9c9-96668bc5a2d1_1280x1280.png</url><title>GTM AI Podcast &amp; Newsletter: AI Education Playbook for Professionals</title><link>https://www.gtmaipodcast.com/s/ai-business-network</link></image><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 01:58:24 GMT</lastBuildDate><atom:link href="https://www.gtmaipodcast.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Coach K and J Moss]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[gtmaiacademy@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[gtmaiacademy@substack.com]]></itunes:email><itunes:name><![CDATA[Coach K]]></itunes:name></itunes:owner><itunes:author><![CDATA[Coach K]]></itunes:author><googleplay:owner><![CDATA[gtmaiacademy@substack.com]]></googleplay:owner><googleplay:email><![CDATA[gtmaiacademy@substack.com]]></googleplay:email><googleplay:author><![CDATA[Coach K]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Agents Are Hands. 
The Knowledge Graph Is the Brain.]]></title><description><![CDATA[104 agents without a shared memory is 104 consultants in a Slack channel who never read each other&#8217;s messages.]]></description><link>https://www.gtmaipodcast.com/p/agents-are-hands-the-knowledge-graph</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/agents-are-hands-the-knowledge-graph</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Tue, 21 Apr 2026 11:13:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!V5W7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3406755-b6ba-413e-b919-ef020dcd294d_1111x808.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>That sentence has been sitting in the back of my head for six months. I&#8217;ve watched a lot of companies go wide on agents, proud of the count, proud of the range, proud of the &#8220;AI transformation&#8221; slide in the board deck. And then I ask what happens on a Tuesday morning when the marketing agent learns something important about a competitor, and the sales agent picks up a call with that same competitor an hour later. Nothing happens. The second agent starts cold. The first agent&#8217;s insight evaporates the moment the session ends. The count on the slide is a lie.</p><p>Revenue is engineered, not hoped for. The same thing is true for agentic systems. More agents don&#8217;t produce more intelligence if the underlying system has no memory.</p><h2>What I Got Wrong The First Time</h2><p>I&#8217;ll save you the lecture and lead with the confession. My first version of this system was genuinely embarrassing.</p><p>I had a folder of agent files. I had prompts I was proud of. I had a handful of skills wired into Claude Code. What I did not have was any way for those agents to know what any other agent had ever done, said, or learned. Every session started at zero. Every question got researched again. 
The CMO agent would produce a positioning brief on Monday. The sales enablement agent would ask the same positioning questions on Wednesday. I was paying in tokens and time for work I&#8217;d already done.</p><p>The deeper failure was that I couldn&#8217;t see it. The individual outputs looked great. Each agent, on its own, produced a useful artifact. It was only when I tried to chain them that the gaps showed up. A research agent would cite a stat. A writing agent would misquote it by 15%. A review agent would miss the misquote because it had never seen the original. Each agent was locally competent and globally incoherent.</p><p>That&#8217;s the failure mode nobody talks about when they show off their agent count. Agents that can&#8217;t share memory are not a system. They are a cast.</p><h2>The Extended Analogy, Then The Twist</h2><p>Most people think about agents like hiring. You&#8217;ve got a CMO, a CRO, a controller, a research analyst. Each one is a specialist. Each one has a job description. You build your org chart, you set goals, you run 1:1s, and the work gets done.</p><p>That framing is almost right, and the &#8220;almost&#8221; is the part that kills companies.</p><p>A real hire has shared context by default. They sit in the same meetings. They hear the same hallway conversations. They read the same Slack threads. They remember what happened last quarter because they lived through it. An agent has none of that. An agent is a contractor on their first day, every day, forever, unless you build the substrate that gives them continuity. Your org chart of agents is not 104 hires. It is 104 contractors walking into a building with no elevator. They can each do great work on floor 7. They just can&#8217;t get to floor 8.</p><p>The knowledge graph is the elevator.</p><h2>Why Memory Is A Layer, Not A Feature</h2><p>The Revenue Nervous System has six layers: Data, Intelligence, Context, Memory, Orchestration, Execution. 
I get asked all the time why Memory is a layer and not a capability inside Orchestration or Context. Because Memory is what makes the other layers cooperate.</p><p>Data without Memory is a warehouse that forgets yesterday. Intelligence without Memory is a model that re-derives the same pattern every morning. Context without Memory is a retrieval step that never learns which retrievals worked. Orchestration without Memory is routing that treats every request like it&#8217;s the first one. Execution without Memory is a writeback that nobody reads.</p><p>Memory is what turns a stack of cooperating services into a system that compounds. Without it, every run is independent. With it, the week after produces better answers than the week before, because the week before left a trace.</p><p>The shape of the fix is a knowledge graph with four layers that stay in sync whenever anything gets written. Visual, context, vector, temporal. Each answers a different question. Together they are the Memory Layer. Agents that can&#8217;t share memory are a cast. Agents that can share memory are a team.</p><p>That is the framing. 
The next question is the only one that actually matters for operators: what does this look like on disk, what runs when, and how do you build it yourself without hiring a platform team.</p><div><hr></div><h2>How You Actually Build It</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V5W7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3406755-b6ba-413e-b919-ef020dcd294d_1111x808.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V5W7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3406755-b6ba-413e-b919-ef020dcd294d_1111x808.png 424w, https://substackcdn.com/image/fetch/$s_!V5W7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3406755-b6ba-413e-b919-ef020dcd294d_1111x808.png 848w, https://substackcdn.com/image/fetch/$s_!V5W7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3406755-b6ba-413e-b919-ef020dcd294d_1111x808.png 1272w, https://substackcdn.com/image/fetch/$s_!V5W7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3406755-b6ba-413e-b919-ef020dcd294d_1111x808.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V5W7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3406755-b6ba-413e-b919-ef020dcd294d_1111x808.png" width="1111" height="808" 
class="sizing-normal" alt="" sizes="100vw" loading="lazy"></picture></div></a></figure></div>
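<p>The four-layer write described above (visual, context, vector, temporal staying in sync whenever anything gets written) can be sketched in a few lines. This is an illustrative sketch, not the author&#8217;s implementation: the store shapes, field names, and the hash stand-in for embeddings are all assumptions.</p>

```python
# Illustrative sketch only: the post names four layers (visual, context,
# vector, temporal) that stay in sync on every write. Store shapes and
# field names here are hypothetical, not the author's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SharedMemory:
    visual: list = field(default_factory=list)    # graph edges, for visualization
    context: dict = field(default_factory=dict)   # full documents by id
    vector: dict = field(default_factory=dict)    # embeddings by id (stubbed)
    temporal: list = field(default_factory=list)  # append-only event log

    def write(self, doc_id: str, text: str, links: list) -> None:
        """One write updates all four layers, so every agent sees it."""
        self.context[doc_id] = text
        self.vector[doc_id] = hash(text)  # stand-in for a real embedding call
        self.visual.extend((doc_id, other) for other in links)
        self.temporal.append((datetime.now(timezone.utc), doc_id))

memory = SharedMemory()
memory.write("competitor-x-brief", "Competitor X cut prices 20%.", ["competitor-x"])
# A different agent, in a different session, reads the same record:
print(memory.context["competitor-x-brief"])
```

<p>The point of the single <code>write</code> path is the Tuesday-morning scenario from the opening: the sales agent&#8217;s session starts by reading what the marketing agent&#8217;s session left behind.</p>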
      <p>
          <a href="https://www.gtmaipodcast.com/p/agents-are-hands-the-knowledge-graph">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[How to Build Orchestration Agents and Smart Routing]]></title><description><![CDATA[The three layers that turn a folder of prompts into a working AI team, plus the step-by-step to build each one.]]></description><link>https://www.gtmaipodcast.com/p/how-to-build-orchestration-agents</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/how-to-build-orchestration-agents</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Mon, 20 Apr 2026 19:26:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XJA2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d1768-a3fd-4bd7-af68-1215886befde_907x453.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people build AI systems the way you&#8217;d build a spice rack. Collect the jars. Label them. Arrange them neatly. Stand in front of the rack every time you cook, read the labels, pick a jar, wonder why dinner takes two hours.</p><p>A real kitchen has a line. Ticket comes in, the expediter reads it, it goes to the station that can cook that plate, the plate comes back, it ships. The cook doesn&#8217;t read every label every time. The expediter does. The cook cooks.</p><p>That&#8217;s the difference between a prompt library and an orchestrated system. It&#8217;s also why most AI rollouts stall the second the novelty wears off.</p><p>This guide is how you build the expediter. Three working layers by the end: a registry that lists your specialists, a router that picks the right one for a given task, and an orchestrator that coordinates handoffs when the job needs more than one pair of hands.</p><p>You don&#8217;t need to be technical. You need to be able to write a clear one-sentence job description and name the things your team actually ships. That&#8217;s the whole skill. If you can run a team, you can build this. 
The parts that look like code (the registry, the instruction files) are patterns your AI can generate for you once you tell it what each specialist does. Your job is the thinking. The typing is automatable.</p><p>If you&#8217;re a founder, a GTM operator, or an exec who has been watching your people bounce between ten tools and ten chat windows, this is the piece that makes the mess coherent.</p><div><hr></div><h2>The Problem With One Giant Prompt</h2><p>You&#8217;ve probably tried the one-giant-prompt approach. A single system prompt that says &#8220;you are a world-class marketer and salesperson and engineer and legal advisor.&#8221; It kind of works. Until it doesn&#8217;t.</p><p>Three things break it:</p><ol><li><p><strong>Context dilution.</strong> Every capability you bolt on makes every answer a little blurrier. The model can&#8217;t specialize in ten things at once, because specialization is depth of frameworks, not just knowledge of terminology.</p></li><li><p><strong>No composability.</strong> You can&#8217;t hand off a sub-task. It&#8217;s one persona, so every task starts from zero context.</p></li><li><p><strong>No accountability.</strong> When the output is wrong, you don&#8217;t know which part of the prompt failed. You tune the whole thing and hope.</p></li></ol><p>Orchestration fixes all three. Each agent gets a narrow job, a clear handoff interface, and a visible mandate that either works or needs rewriting. The system becomes diagnosable. 
Diagnosable is the precondition for improvable.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XJA2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d1768-a3fd-4bd7-af68-1215886befde_907x453.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XJA2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d1768-a3fd-4bd7-af68-1215886befde_907x453.png 424w, https://substackcdn.com/image/fetch/$s_!XJA2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d1768-a3fd-4bd7-af68-1215886befde_907x453.png 848w, https://substackcdn.com/image/fetch/$s_!XJA2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d1768-a3fd-4bd7-af68-1215886befde_907x453.png 1272w, https://substackcdn.com/image/fetch/$s_!XJA2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d1768-a3fd-4bd7-af68-1215886befde_907x453.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XJA2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d1768-a3fd-4bd7-af68-1215886befde_907x453.png" width="907" height="453" 
class="sizing-normal" alt="" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p></p>
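<p>The three layers this guide builds (a registry of specialists, a router that picks one, an orchestrator that coordinates handoffs) can be sketched as a minimal pattern. Everything here is illustrative: the agent names, mandates, and keyword-scoring rule are assumptions, stand-ins for the instruction files your AI would generate for you.</p>

```python
# Illustrative sketch of the three layers: registry, router, orchestrator.
# Agent names, mandates, and the keyword routing rule are hypothetical.
REGISTRY = {
    "cmo":        {"mandate": "positioning, messaging, launch plans",
                   "keywords": {"positioning", "messaging", "launch"}},
    "research":   {"mandate": "market and competitor research",
                   "keywords": {"competitor", "market", "research"}},
    "enablement": {"mandate": "sales decks and talk tracks",
                   "keywords": {"deck", "talk track", "objection"}},
}

def route(task: str) -> str:
    """Pick the specialist whose keywords best match the task (the expediter)."""
    words = task.lower()
    scores = {name: sum(kw in words for kw in spec["keywords"])
              for name, spec in REGISTRY.items()}
    return max(scores, key=scores.get)

def orchestrate(tasks: list) -> list:
    """Hand each sub-task to its station; a real system passes outputs along too."""
    return [(route(t), t) for t in tasks]

plan = orchestrate([
    "Research competitor pricing moves this quarter",
    "Draft positioning messaging for the launch",
])
print(plan)
```

<p>Notice what this buys you: when a routing decision is wrong, you know exactly which layer failed, the registry entry, the router rule, or the handoff. That is the diagnosability the one-giant-prompt approach can never give you.</p>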
      <p>
          <a href="https://www.gtmaipodcast.com/p/how-to-build-orchestration-agents">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Schedule]]></title><description><![CDATA[It&#8217;s 6:47 AM and the competitive intel report is already waiting in your inbox.]]></description><link>https://www.gtmaipodcast.com/p/claude-schedule</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-schedule</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Thu, 26 Mar 2026 12:32:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a1092679-3bfc-4bb9-a46a-8d020d8f346d_1400x1075.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s 6:47 AM and the competitive intel report is already waiting in your inbox. You didn&#8217;t write a prompt this morning. You didn&#8217;t even open your laptop yet. Claude ran the analysis at 5:00 AM, pulled the latest from your tracked competitors, compared it against last week&#8217;s snapshot, and delivered a summary to your inbox before your alarm went off.</p><p>That&#8217;s not a feature. That&#8217;s a fundamentally different relationship with AI.</p><p>Most people use Claude the same way they use Google &#8212; reactively. They have a problem, they open the app, they type something. Claude answers. Session ends. Claude forgets it happened. You come back tomorrow and start over. The model is sitting idle 23 hours a day, waiting for you to remember it exists.</p><p>Claude Schedule breaks that pattern. It turns Claude from a tool you pick up into a system that runs alongside your work &#8212; processing, monitoring, reporting &#8212; whether you&#8217;re in the room or not. Cowork has <code>/schedule</code> for recurring task cadences. Claude Code has <code>/loop</code> for tight operational cycles. 
Together they give you something most executives don&#8217;t think to build: a Claude that works the night shift.</p><p>Here&#8217;s how to set it up.</p><div><hr></div><h2>Step 1: Understand What Schedule Actually Is</h2><p>Before you configure anything, get the mental model right, because the failure mode here is treating Schedule like a calendar reminder. It&#8217;s not.</p><p>A calendar reminder says &#8220;do this at 9 AM.&#8221; Schedule says &#8220;run this entire Claude workflow at 9 AM, with no human in the loop.&#8221; That&#8217;s a different thing. It means the prompt you write today will execute &#8212; unchanged &#8212; on Tuesday, next Tuesday, and the Tuesday after that. The output quality scales directly with the prompt quality. A lazy prompt produces lazy recurring output. A sharp prompt produces sharp recurring intelligence.</p><p>There are two implementations.</p><p><strong>Claude Cowork </strong><code>/schedule</code> is designed for business cadences &#8212; daily briefings, weekly reports, Monday morning pipeline summaries. You set a natural-language schedule (&#8220;every weekday at 7 AM&#8221;), attach a prompt or task, and Cowork runs it on that interval. Output lands wherever you&#8217;ve configured it: in the conversation thread, in a connected integration, in a document. Think of this as your recurring content and intelligence layer.</p><p><strong>Claude Code </strong><code>/loop</code> is designed for operational monitoring &#8212; tighter cycles, often measured in minutes rather than days. The syntax is direct: <code>/loop 5m /check-deploy</code> runs the <code>/check-deploy</code> command every five minutes until you stop it. This is your CI/CD babysitter, your deploy watcher, your PR queue monitor.
It&#8217;s built for the terminal, not the boardroom.</p><p>Both share one critical constraint you need to internalize before you build anything on top of them: <strong>the computer has to stay awake and the app has to be running.</strong> These are not cloud-scheduled jobs. There&#8217;s no server executing your tasks while your MacBook is in a bag at 35,000 feet. If your machine sleeps, your schedule sleeps. If you quit the app, the loop dies.</p><p>This isn&#8217;t a bug you need to work around &#8212; it&#8217;s a design constraint you need to plan around. Set your energy settings to prevent sleep when these workflows matter. Or run them on a machine that stays on. The executives who get the most out of Schedule have a designated machine &#8212; an old MacBook, a Mac Mini, something that never closes &#8212; where Claude runs continuously. That&#8217;s the move.</p><div><hr></div><h2>Step 2: Set Up Your First Scheduled Task in Cowork</h2><p>Open Claude Cowork and type <code>/schedule</code> in any conversation. Cowork will walk you through the configuration in natural language. You&#8217;ll set three things: the timing, the task, and the output.</p><p><strong>Timing.</strong> Cowork accepts plain English. &#8220;Every weekday at 7 AM.&#8221; &#8220;Every Monday at 9 AM.&#8221; &#8220;Daily at 6:30 AM.&#8221; Don&#8217;t overthink the syntax &#8212; it&#8217;s genuinely conversational. If it misreads your intent, it&#8217;ll confirm before saving.</p><p><strong>The task.</strong> This is your prompt. Write it like you&#8217;re writing for a future version of yourself who forgot everything about this project. Include the context you&#8217;d normally carry in your head. If you want competitive intel, don&#8217;t just say &#8220;check on competitors&#8221; &#8212; tell it which competitors, what signals matter, how you want the output framed. 
A good scheduled prompt is more explicit than a live prompt, because there&#8217;s no back-and-forth to course-correct.</p><p><strong>Output.</strong> Where does the result go? By default it surfaces in the conversation thread. If you&#8217;ve connected integrations, you can route output to email, Slack, or a document.</p><p>For Claude Code <code>/loop</code>, the setup is in the terminal. Navigate to your project directory, make sure you have a <code>/check-deploy</code> skill (or whatever command you want to loop) defined, and run:</p><pre><code><code>/loop 5m /check-deploy</code></code></pre><p>That&#8217;s it. Claude Code will execute <code>/check-deploy</code> every five minutes, print the result, and run again. Hit <code>Ctrl+C</code> to stop. You can loop any skill, any slash command, or any plain prompt.</p><p>The key discipline with <code>/loop</code>: keep the looped command tight and stateless. It should be designed to run in isolation, give you a clear status in a few lines, and complete cleanly. Don&#8217;t loop long analytical tasks &#8212; that&#8217;s what <code>/schedule</code> is for. Loop status checks, monitors, and watchers.</p><div><hr></div><h2>Step 3: Build a Daily Morning Briefing</h2><p>This is the first scheduled workflow worth building, because you&#8217;ll feel the value immediately and it trains you on what good scheduled prompts look like.</p><p>Here&#8217;s the prompt structure that works. In Cowork, schedule this for every weekday at 6:30 AM (or whenever you actually look at your phone before your first meeting):</p><pre><code><code>Good morning. Give me a daily briefing covering:

1. What's on my calendar today (if calendar is connected) &#8212; key meetings and prep I need to do
2. Any open threads or unresolved questions from our recent conversations
3. One thing I should be thinking about that I probably haven't looked at in the last 48 hours

Keep it under 200 words. Be direct. If there's nothing notable, say so.</code></code></pre><p>Adjust the three bullets to match your actual priorities. The point is specificity. &#8220;Daily briefing&#8221; without structure produces a vague summary. &#8220;These three specific things&#8221; produces something you can act on in 90 seconds.</p><p>After it runs for a week, you&#8217;ll start to notice what&#8217;s missing and what&#8217;s noise. That&#8217;s the signal to edit the prompt. Scheduled prompts are living documents &#8212; refine them the same way you&#8217;d refine any process that runs repeatedly.</p><p>What changes when this is working: you stop starting your day in reactive mode. You come into the morning with a frame already built, which means the first hour of work produces more than the next three would have without it.</p><div><hr></div>
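<p>The discipline described above for <code>/loop</code> (tight, stateless, a clear status in a few lines, clean completion each cycle) can be sketched as a plain polling loop. This is a hypothetical illustration of the pattern, not Claude Code&#8217;s implementation; the check function is a stand-in for whatever your real probe does.</p>

```python
# Illustrative sketch of the /loop pattern: run a stateless check on an
# interval, print a short status, exit each cycle cleanly. Not Claude Code's
# implementation; check_deploy is a hypothetical stand-in.
import time

def check_deploy() -> str:
    """Stateless status check; a real one would probe your deploy pipeline."""
    return "deploy: green"

def loop(interval_s: float, check, max_runs: int) -> list:
    """Run `check` every `interval_s` seconds, like `/loop 5m /check-deploy`."""
    results = []
    for _ in range(max_runs):       # the real /loop runs until Ctrl+C; capped here
        results.append(check())
        time.sleep(interval_s)
    return results

print(loop(0.01, check_deploy, 3))  # three quick cycles for demonstration
```

<p>The shape is the point: if your looped command needs memory between runs or takes minutes to finish, it belongs in <code>/schedule</code>, not <code>/loop</code>.</p>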
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-schedule">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Dispatch]]></title><description><![CDATA[Picture this: you&#8217;re in an airport, forty-five minutes from your flight, and you just remembered you need a full competitive analysis, a redlined contract summary, and a research brief ready before your 9 AM tomorrow.]]></description><link>https://www.gtmaipodcast.com/p/claude-dispatch</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-dispatch</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Wed, 25 Mar 2026 12:30:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c23a1664-b1c9-426f-9498-0af0e8f8673d_1156x627.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Picture this: you&#8217;re in an airport, forty-five minutes from your flight, and you just remembered you need a full competitive analysis, a redlined contract summary, and a research brief ready before your 9 AM tomorrow. Your laptop is in your bag. You could dig it out, find a power outlet, set up at a gate &#8212; or you could pull out your phone, fire off three instructions to Claude, and board the plane knowing the work is running on your desktop while you&#8217;re in the air.</p><p>That&#8217;s what Claude Dispatch makes possible. And it matters more than it looks like at first glance.</p><p>The premise is simple: your desktop is always on. You&#8217;re not always at it. Dispatch closes that gap. It&#8217;s a feature inside Claude Cowork &#8212; Anthropic&#8217;s desktop app &#8212; that lets you remote-control your Claude session from your phone. You send an instruction from mobile, your desktop Claude executes it. You don&#8217;t need to be sitting there. You just need to have left the app running.</p><p>Launched March 17, 2026, currently in research preview on Max and Pro plans. 
Here&#8217;s how to get it running and, more importantly, how to actually use it.</p><div><hr></div><h2>Step 1: What Dispatch Is and Why It Changes How You Work</h2><p>Most people treat AI tools as synchronous. You sit down, you prompt, you wait, you read, you iterate. Everything happens in the same session, at the same desk, in the same window of time. That model made sense when AI was a search replacement. It doesn&#8217;t make sense when AI can execute multi-hour research tasks, draft entire documents, or process large datasets without you doing anything.</p><p>The problem isn&#8217;t that Claude is too slow. It&#8217;s that you can&#8217;t always be at your desktop when you want to start work &#8212; and by the time you get there, you&#8217;ve lost the window to queue something meaningful before a meeting, before a flight, before the end of the day.</p><p>Dispatch solves the activation gap. The instruction you couldn&#8217;t send because you weren&#8217;t at your desk can now be sent from wherever you are. Your desktop &#8212; which is running anyway &#8212; picks it up and executes it.</p><p>The practical shift this creates: you stop thinking about AI tasks as things you do at your desk and start thinking about them as things you deploy. You&#8217;re in a cab, you remember you need a summary of last quarter&#8217;s earnings calls before the board meeting. You send it from your phone. By the time you&#8217;re through security, it&#8217;s done. You&#8217;re in a meeting that runs long, and you want Claude to start pulling together the market sizing you need for your next call. You excuse yourself, send the instruction from the hallway, and it&#8217;s running before you&#8217;re back in the room.</p><p>This is the remote work angle that most coverage of Dispatch misses. It&#8217;s not just for people working from home or on the road &#8212; it&#8217;s for anyone whose best thinking about what they need happens when they&#8217;re away from their keyboard. 
Which is most people, most of the time.</p><p>The only hard requirement: your desktop must be running Claude Cowork and connected to the internet when the instruction arrives. Close your laptop and you&#8217;ve closed the connection. Leave it running and the desktop becomes a persistent compute resource you can trigger from anywhere.</p><div><hr></div><h2>Step 2: Setup &#8212; Enabling Dispatch and Linking Your Mobile</h2><p>You need Claude Cowork installed on your desktop and the Claude mobile app on your phone. Both need to be signed into the same Anthropic account on a Max or Pro plan. That&#8217;s the full requirements list.</p><p><strong>On desktop:</strong></p><p>Open Claude Cowork. Go to Settings. Under the Cowork section, find Dispatch &#8212; it&#8217;ll be labeled as a research preview feature. Toggle it on. You&#8217;ll see a confirmation that your desktop session is now reachable from your mobile device when this machine is running and connected.</p><p>That&#8217;s the full desktop setup. No API keys, no configuration files, no webhooks to configure. Cowork handles the connection layer.</p><p><strong>On mobile:</strong></p><p>Open the Claude app on your iPhone or Android device. In the bottom nav or the menu (varies slightly by platform), look for the Dispatch icon &#8212; it appears after you&#8217;ve enabled it on desktop. Tap it. You&#8217;ll see the connection status: either your desktop is online and reachable, or it isn&#8217;t.</p><p>If your desktop is running and connected, the status shows green. If you turned off your laptop without leaving Cowork running, it&#8217;ll show as offline.</p><p><strong>Permissions to know about:</strong></p><p>Dispatch respects the same permissions and tool access your desktop Claude session has. If you&#8217;ve connected Claude Code, file system access, or any MCP integrations to your Cowork session, those tools are available to instructions you send via Dispatch. 
If you haven&#8217;t, Dispatch can still execute any task that doesn&#8217;t require local file access &#8212; research, drafting, analysis, synthesis.</p><p>One thing worth knowing upfront: Dispatch sends your instruction to your existing desktop session. It&#8217;s not spawning a new session &#8212; it&#8217;s sending a message to the Claude instance already running on your machine. If you had an active conversation open when you left your desk, Dispatch picks up in that context unless you specify otherwise. If you want a fresh task to run without prior context, note that in your instruction.</p><div><hr></div><h2>Step 3: Your First Remote Instruction</h2><p>Before building out power workflows, run one task end-to-end. This is the step that makes Dispatch real rather than theoretical.</p><p>Leave your desktop running with Claude Cowork open. Walk away from it &#8212; go to another room, or step outside. Pull out your phone and open the Claude app.</p><p>Tap into Dispatch. Confirm your desktop shows as online.</p><p>Send this instruction:</p><blockquote><p>Summarize the current competitive landscape for [your industry]. I need: the three most active competitors right now, what each one is doing that&#8217;s working, and the one move any of them could make in the next six months that would be a real threat to us. We are [one sentence description of your company]. Use publicly available information. Have this ready when I get back to my desk.</p></blockquote><p>Send it. Put your phone away. Go back to your desk in ten or fifteen minutes.</p><p>What you&#8217;ll find: Claude executed the task against your instruction, in your desktop session, without you sitting there. The output is waiting for you.</p><p>This is the moment that reframes how you use the tool. The instruction you couldn&#8217;t send because you weren&#8217;t at your keyboard can now be sent from anywhere. 
The gap between &#8220;I need this&#8221; and &#8220;I&#8217;m at my desk to start it&#8221; disappears.</p><p>Run this once on something real. The mechanics become obvious in one use.</p><div><hr></div>
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-dispatch">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Channels]]></title><description><![CDATA[It&#8217;s 2am. Your staging deployment just failed. GitHub Actions sent a notification, Slack has three messages from the on-call engineer asking what happened, and nobody&#8217;s looking at any of it because everyone is asleep.]]></description><link>https://www.gtmaipodcast.com/p/claude-channels</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-channels</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Tue, 24 Mar 2026 12:29:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/963fc042-dad8-4739-8586-b4be0500a96d_2048x1226.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s 2am. Your staging deployment just failed. GitHub Actions sent a notification, Slack has three messages from the on-call engineer asking what happened, and nobody&#8217;s looking at any of it because everyone is asleep.</p><p>Old world: you find out in the morning, do triage over coffee, and the fix ships mid-afternoon. The pipeline was broken for twelve hours before anyone touched it.</p><p>New world with Claude Channels: the failure hits Telegram, Claude Code wakes up, investigates the logs, traces the error, pushes a fix, runs the build again, and messages you back with a three-sentence summary of what happened and what it did. You wake up to a resolved incident instead of an open one.</p><p>That&#8217;s not a demo scenario. That&#8217;s the literal capability Anthropic shipped on March 20, 2026 &#8212; and it fundamentally changes what &#8220;agentic&#8221; actually means in practice.</p><div><hr></div><h2>Step 1: What Channels Is (and Why the Reactive vs. Proactive Problem Matters)</h2><p>Every AI workflow until now has been pull-based. You open a session, describe what you need, Claude does the work, you close the session. 
It&#8217;s powerful &#8212; but it&#8217;s still you initiating every single action. Claude sits idle until you show up.</p><p>Channels flips that model. It lets external events &#8212; a CI failure, a GitHub notification, a Telegram message from a teammate, a Discord mention in your dev server &#8212; push directly into a running Claude Code session as triggers for autonomous action. Claude isn&#8217;t waiting for you to ask. It&#8217;s listening for events, and when they arrive, it works.</p><p>The technical implementation is MCP. Channels are MCP servers &#8212; specifically, servers that connect Claude Code to messaging infrastructure. Telegram and Discord are the launch integrations, but the protocol is open, which means the surface area will expand fast.</p><p>What this unlocks is a different category of automation. Not &#8220;Claude helps me do a thing faster&#8221; but &#8220;Claude handles a class of things without me.&#8221; The reactive vs. proactive distinction sounds academic until you experience the difference in your actual operational cadence. Then it&#8217;s obvious.</p><p><strong>What you need to get started:</strong></p><ul><li><p>Claude Code v2.1.80 or later (check with <code>claude --version</code>)</p></li><li><p>Bun runtime installed (<code>curl -fsSL https://bun.sh/install | bash</code>)</p></li><li><p>A claude.ai account (login required &#8212; Channels uses your claude.ai identity for the MCP connection)</p></li><li><p>Either a Telegram bot token or Discord bot token depending on which channel you&#8217;re connecting</p></li></ul><div><hr></div><h2>Step 2: Setup &#8212; Installing the MCP Server and Configuring Your Channel</h2><p>The Channels MCP server is what bridges your messaging platform to Claude Code. 
You&#8217;re adding it the same way you&#8217;d add any MCP server &#8212; a JSON configuration block that tells Claude Code where to find it and how to run it.</p><p><strong>Step 1: Install the Channels server via Bun</strong></p><pre><code><code>bunx @anthropic-ai/claude-channels install</code></code></pre><p>This installs the server binary and sets up the local runtime. Bun handles the dependency resolution &#8212; the reason Bun is required over Node is startup speed. Channels needs to respond to inbound events fast, and Bun&#8217;s cold-start time is roughly 4x faster than Node for this workload.</p><p><strong>Step 2: Add the MCP configuration to Claude Code</strong></p><p>Open or create <code>~/.claude/claude_desktop_config.json</code> and add the following under <code>mcpServers</code>:</p><pre><code><code>{
  "mcpServers": {
    "channels": {
      "command": "bunx",
      "args": ["@anthropic-ai/claude-channels", "serve"],
      "env": {
        "CLAUDE_CHANNELS_AUTH": "your-claude-ai-token"
      }
    }
  }
}</code></code></pre><p>Your <code>CLAUDE_CHANNELS_AUTH</code> token is available in your claude.ai account settings under Developer &#8594; API Tokens. Generate one scoped to Channels if you want to limit surface area.</p><p><strong>Step 3: Configure your first messaging platform</strong></p><p>For Telegram, you need a bot token. Get one from BotFather &#8212; open Telegram, search <code>@BotFather</code>, send <code>/newbot</code>, follow the prompts. Copy the token it gives you.</p><pre><code><code>claude channels add telegram --token YOUR_BOT_TOKEN --name "claude-ops"</code></code></pre><p>For Discord, you need a bot token from the Discord Developer Portal (discord.com/developers/applications). Create an application, add a bot, copy the token.</p><pre><code><code>claude channels add discord --token YOUR_BOT_TOKEN --guild YOUR_SERVER_ID --name "claude-ops"</code></code></pre><p><strong>Step 4: Restart Claude Code</strong></p><pre><code><code>claude restart</code></code></pre><p>On startup, Claude Code will initialize the Channels server and establish the connection to your configured platform. You&#8217;ll see a confirmation in the terminal output: <code>Channels: connected (telegram: claude-ops)</code>.</p><div><hr></div><h2>Step 3: Your First Channel &#8212; Send a Message, Watch Claude React</h2><p>With the server running and Telegram connected, you have a live two-way bridge between your phone and a Claude Code session.</p><p>Open Telegram. Find the bot you just created &#8212; search by the username you gave it during BotFather setup. Send it a message:</p><pre><code><code>@yourbot what's the current state of my project?</code></code></pre><p>Watch your terminal. Claude Code receives the message as an injected prompt, runs against your current working directory, and responds &#8212; both in the terminal and back to you in Telegram.</p><p>That first response will feel underwhelming if you ask something generic. 
The capability reveals itself when you send it something with real operational context.</p><p>Try this instead:</p><pre><code><code>@yourbot run git log --oneline -10 and tell me what's been committed in the last 48 hours</code></code></pre><p>Claude executes the command, reads the output, and sends you a clean summary in Telegram. You just ran a terminal command from your phone via a chat message. That&#8217;s the primitive. Everything else in this guide builds on it.</p><p>A few things to understand about how the two-way messaging works before you go further:</p><p>Claude Code sends messages back through the same channel that triggered it. If the message came from Telegram, the response goes to Telegram. If it came from Discord, it goes to Discord. The session stays alive between triggers &#8212; Claude maintains context across multiple messages in the same channel thread, which means you can have a real back-and-forth from your phone without typing a single character in a terminal.</p><p>Message length is capped at Telegram&#8217;s standard 4096 characters per message. For longer outputs &#8212; log analysis, full reports &#8212; Claude will chunk the response automatically. You can override this with an instruction in your trigger message: &#8220;respond in under 200 words&#8221; keeps things clean for quick status checks.</p><div><hr></div>
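<p>Claude chunks long responses for you, but if you want a feel for what that splitting involves, here is a minimal sketch in Python. This is an illustration of the general technique, not Anthropic&#8217;s implementation; the function name and the newline-preferring strategy are assumptions. Only the 4096-character limit comes from Telegram itself.</p>

```python
TELEGRAM_MAX = 4096  # Telegram's per-message character limit

def chunk_message(text: str, limit: int = TELEGRAM_MAX) -> list[str]:
    """Split text into pieces under `limit` chars, preferring newline boundaries.

    Hypothetical helper for illustration -- not the actual Channels code.
    """
    chunks = []
    while len(text) > limit:
        # Prefer the last newline inside the window so we don't split mid-line.
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit  # no newline in the window: hard split
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

<p>A 10,000-character log dump would come back as three Telegram messages under this scheme, each under the cap.</p>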
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-channels">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Memory]]></title><description><![CDATA[You&#8217;ve been in the middle of a project for three weeks.]]></description><link>https://www.gtmaipodcast.com/p/claude-memory</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-memory</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Mon, 23 Mar 2026 12:26:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e1f1c4f4-bf91-4e92-96c0-54176030d8d2_1613x1080.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You&#8217;ve been in the middle of a project for three weeks. Claude knows your ICP, your company&#8217;s positioning, your writing voice, the competitive context, the particular angle you&#8217;ve been developing for the past month. The work is going well. Then you open a new conversation and type &#8220;Hi&#8221; &#8212; and it&#8217;s gone. All of it. You&#8217;re back to a blank slate.</p><p>Most people accept this as the cost of the tool. It&#8217;s not. It&#8217;s a design problem, and it has a design solution.</p><p>Claude&#8217;s memory isn&#8217;t missing &#8212; it&#8217;s distributed across five different mechanisms with completely different persistence characteristics. Once you understand which layer does what, you stop losing context and start building memory that compounds. This guide is the map.</p><div><hr></div><h2>Step 1: How Claude&#8217;s Memory Actually Works &#8212; The Five Layers</h2><p>Most people interact with exactly one layer of Claude&#8217;s memory: in-conversation context. The rest are either unknown or underused, which means they&#8217;re leaving most of the infrastructure untouched.</p><p>Here&#8217;s the full picture.</p><p><strong>Layer 1: In-Conversation Context</strong></p><p>Everything in your current session. 
The model processes everything in the context window &#8212; every message, every file you&#8217;ve pasted, every response it&#8217;s generated &#8212; and uses it when responding. This is why Claude can refer back to something you said 40 messages ago in the same conversation.</p><p>The catch: when the conversation ends, this layer is gone. Not archived somewhere retrievable. Gone. Claude cannot access a previous conversation&#8217;s content when you start a new one. This is the layer most people treat as the only layer, which is why they spend the first five minutes of every conversation re-briefing an AI that has no idea who they are.</p><p><strong>Layer 2: Projects Memory (Cowork)</strong></p><p>Available on Claude.ai Pro and Team plans. Projects give you a persistent context layer that loads automatically at the start of every conversation inside that project. You write Project Instructions once &#8212; who you are, what you&#8217;re working on, what constraints apply &#8212; and every subsequent conversation inherits that context.</p><p>The important nuance: this is not transcript memory. Claude doesn&#8217;t read your previous conversations before responding in a new one. It has your instructions and your uploaded files, not a running log of what you&#8217;ve discussed. This is a crucial distinction when you&#8217;re deciding what to put there.</p><p><strong>Layer 3: CLAUDE.md / Instruction Files</strong></p><p>The most reliable memory mechanism available if you&#8217;re using Claude Code. CLAUDE.md is a markdown file in your project root that Claude Code reads at every session start. It&#8217;s not a prompt &#8212; it&#8217;s persistent instruction infrastructure. Changes you make to CLAUDE.md are available in every future session without any setup.</p><p>This is where system-level behavior lives: how you&#8217;ve structured your project, what agents you&#8217;re using and why, how work should be routed, what constraints apply across the entire environment. 
In a well-configured Claude Code setup, CLAUDE.md is the brain that orients every session.</p><p><strong>Layer 4: Memory Files</strong></p><p>Explicitly written markdown files Claude reads as part of its session orientation. The MEMORY.md pattern &#8212; writing structured context into a file that gets surfaced at session start &#8212; is the closest thing to persistent episodic memory Claude Code has. You write down what Claude needs to remember: key decisions made, current state of active projects, important context that would take 10 minutes to re-establish from scratch.</p><p>The difference between Layer 3 and Layer 4: CLAUDE.md holds behavioral instructions (how to work), MEMORY.md holds factual context (what&#8217;s been done, what&#8217;s true). Both load at session start. Both persist across sessions. Together, they close the gap between &#8220;AI that starts fresh every time&#8221; and &#8220;AI that picks up where you left off.&#8221;</p><p><strong>Layer 5: MCP-Based Memory</strong></p><p>External memory stores connected to Claude via the Model Context Protocol. Vector databases, knowledge graphs, retrieval systems &#8212; any structured store that can receive a query and return relevant context. This layer enables semantic memory: Claude can search your accumulated knowledge by meaning, not just by keyword, and retrieve relevant context dynamically instead of dumping everything into the context window at once.</p><p>This is the most powerful layer and the most complex to set up. It&#8217;s the right tool when your memory store has grown beyond what fits in a context window, when you need semantic retrieval across hundreds of notes or documents, or when you&#8217;re building a system where accumulated insights need to compound across a team.</p><p>The summary: Layer 1 handles your current session. Layers 2-4 handle what persists across sessions through structured files. Layer 5 handles what scales beyond files. Most people only use Layer 1. 
The good setup uses all five intentionally.</p><div><hr></div><h2>Step 2: Projects Memory &#8212; Setting Up Persistent Context in Cowork</h2><p>The first thing to understand about Projects is what they&#8217;re not: they&#8217;re not a memory system that learns from your conversations. They&#8217;re a context injection system that ensures every conversation starts from an informed baseline. The distinction matters because it changes what you put there.</p><p>Projects Instructions should contain context that is true and stable across many conversations: who you are, what you&#8217;re working on, your ICP, your voice, your constraints, your company&#8217;s competitive position. Not your current active tasks. Not the status of a specific deal. Not what you discussed with a prospect last Tuesday. Stable context, not dynamic state.</p><p>Here&#8217;s the test: if the information would still be true six months from now, it belongs in Project Instructions. If it&#8217;s the current status of something that changes week to week, it belongs in a specific conversation.</p><p><strong>What to write in Project Instructions:</strong></p><p>Start with four blocks in this order.</p><p>First block &#8212; who you are and what this project is for:</p><pre><code><code>You are a [role] assistant for [Name], [Title] at [Company].
[Company] is [one-sentence description].
This project is for [specific type of work &#8212; writing, GTM strategy, client work, etc.].</code></code></pre><p>Second block &#8212; domain knowledge Claude needs to give specific rather than generic advice:</p><pre><code><code>ICP: [Specific description &#8212; industry, size, role, trigger events, not just "SMB"]
Primary competitors: [Names and the one key differentiator against each]
Value props ranked by ICP priority: [Specific outcomes, not category claims]
Sales motion: [How you actually sell &#8212; PLG, outbound, channel, hybrid]</code></code></pre><p>Third block &#8212; output requirements:</p><pre><code><code>Voice: [Specific constraints &#8212; first person, short paragraphs, no passive voice]
Format: [What a good response looks like &#8212; length, structure, when to use headers]
When to push back: [Where you want Claude to challenge you, not just comply]</code></code></pre><p>Fourth block &#8212; what NOT to do:</p><pre><code><code>Do not: [Re-summarize what I just said. Add caveats I didn't ask for.
Give generic advice when specific context is available in uploaded files.
Use passive voice, jargon, or hedged language.]</code></code></pre><p>That fourth block is the one most people skip. Instructions that only tell Claude what to do don&#8217;t prevent the default behaviors that frustrate you. Constraints are what make instructions actually change behavior.</p><p><strong>What to upload to a Project:</strong></p><p>Upload your reference materials &#8212; documents Claude would otherwise need you to paste every time. Brand guide. ICP definition. Competitor battlecards. Past content samples (especially important for writing projects &#8212; three to five of your best pieces teach voice better than any description). Case studies. Pricing structure. Product documentation.</p><p>The upload is permanent context. The paste is per-conversation context. Move anything you paste in more than three times to a file and upload it.</p><div><hr></div><h2>Step 3: CLAUDE.md &#8212; The Most Reliable Memory Mechanism in Claude Code</h2><p>If you&#8217;re using Claude Code and you don&#8217;t have a CLAUDE.md, you&#8217;re running the most powerful version of the tool without its most fundamental memory infrastructure.</p><p>CLAUDE.md is read at the start of every Claude Code session. Not sometimes. Every time. It&#8217;s the one memory mechanism with zero maintenance overhead &#8212; write it once, and it loads in perpetuity. This makes it uniquely reliable compared to every other layer.</p><p><strong>What belongs in CLAUDE.md:</strong></p><p>The behavioral architecture of your entire working environment. Not task lists. Not current status. The stable structure of how you work.</p><p>Three categories to cover:</p><p><em>How work gets routed.</em> If you&#8217;ve built specialized agents, CLAUDE.md is where you define the routing rules. Which agent handles which task type, how to classify ambiguous requests, what the tiers of routing complexity look like. 
Without this, Claude defaults to doing everything itself, which means it ignores the specialized agents you built.</p><p><em>How the vault and file system are organized.</em> Where notes live. Where inbox items go. Where output goes. What the processing pipeline looks like (inbox &#8594; distill &#8594; notes, not directly to notes). If the architecture is documented in CLAUDE.md, Claude can navigate and maintain it correctly without you re-explaining it.</p><p><em>What the constraints and guardrails are.</em> Things that should never happen regardless of what&#8217;s requested: don&#8217;t write directly to notes without processing, don&#8217;t commit sensitive files, don&#8217;t skip quality gates. Hard constraints belong in CLAUDE.md because they apply to every session.</p><p>What doesn&#8217;t belong: anything that changes frequently. The current status of a project. Your active tasks. What you worked on last session. That&#8217;s what memory files are for.</p><p><strong>The principle:</strong> CLAUDE.md holds behavior. Memory files hold state. The distinction keeps CLAUDE.md clean and stable while allowing your actual working context to evolve without cluttering your behavioral instructions.</p><div><hr></div>
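<p>To make the behavior-vs-state split concrete, here&#8217;s a minimal sketch of a CLAUDE.md covering the three categories above. The agent names and folder paths are placeholders, not a prescribed layout &#8212; adapt them to your own environment.</p>

```markdown
# CLAUDE.md -- behavioral instructions (stable across sessions)

## Routing
- Research requests go to the research agent; drafting goes to the writer agent.
- Ambiguous requests: ask one clarifying question before routing.

## Vault layout
- New captures land in /inbox. Pipeline: inbox -> distill -> notes.
- Never write directly to /notes without the distill step.

## Guardrails
- Never commit files matching *.env or anything under /secrets.
- Do not skip quality gates, even when asked to move fast.

(Current project status and active tasks belong in MEMORY.md, not here.)
```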
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-memory">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Agents: Multi-Agent Workflows Without Code]]></title><description><![CDATA[Most people use Claude the way they used Google in 2005 &#8212; one question, one answer, next tab.]]></description><link>https://www.gtmaipodcast.com/p/claude-agents-multi-agent-workflows</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-agents-multi-agent-workflows</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 21:03:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ff3100d4-0475-4e1d-9502-2da5bb017847_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people use Claude the way they used Google in 2005 &#8212; one question, one answer, next tab. Sequential. One thing at a time. The same mental model that made sense for a search box is slowing you down with a tool that can run ten things at once.</p><p>Claude agents change what&#8217;s possible. Not because they&#8217;re technically impressive &#8212; because they restructure how work actually gets done. The difference between researching five competitors one at a time over an afternoon and having five parallel conversations that all finish in fifteen minutes is not incremental. It&#8217;s a different category of output. 
Same effort, different throughput.</p><p>You don&#8217;t need to write a single line of code to do any of this.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ul><li><p>An understanding of how Claude agents actually work &#8212; and why sequential use is leaving throughput on the table</p></li><li><p>Your first parallel research project running across multiple Claude conversations simultaneously</p></li><li><p>The orchestrator + worker pattern for complex, multi-part projects</p></li><li><p>Role specialization techniques that produce better output by giving each conversation a specific job</p></li><li><p>Background agent workflows for Claude Code users who want to run projects while doing other work</p></li></ul><div><hr></div><h2>Step 1: What Claude Agents Actually Are</h2><p>An agent, in Claude&#8217;s context, is a Claude instance with a specific role and task running autonomously. Not a different product. Not a plugin. The same Claude &#8212; given a defined job, enough context to do it, and the instruction to complete it and report back.</p><p>Here&#8217;s the insight that changes how you work: Claude Code &#8212; Anthropic&#8217;s CLI tool &#8212; can spawn multiple agents simultaneously. Each one works on a different piece of a larger project at the same time, then reports back to a coordinating conversation that synthesizes the outputs. A project that would take you three hours of sequential Claude sessions takes forty-five minutes because the work runs in parallel.</p><p>But you don&#8217;t need Claude Code to apply agent thinking. In Claude.ai, you can run multiple browser tabs &#8212; each one a separate Claude conversation with a distinct role and task. It&#8217;s the manual version of the same pattern. Same principle. Available to any Claude Pro subscriber today.</p><p>The core insight is this: most knowledge work isn&#8217;t sequential. 
The five competitor research briefs you need for Monday&#8217;s strategy meeting aren&#8217;t dependent on each other. The three draft angles you&#8217;re evaluating for a content piece can be written at the same time. The four sections of a long document you need reviewed don&#8217;t have to wait in line. Once you see which tasks are genuinely interdependent and which ones just feel sequential because you&#8217;ve been doing them one at a time, the pattern becomes obvious.</p><p>Sequential vs. parallel isn&#8217;t about being faster at the same work. It&#8217;s about doing fundamentally more work in the same window of time.</p><div><hr></div><h2>Step 2: Your First Parallel Workflow</h2><p>Open Claude.ai. Open four browser tabs &#8212; each one a new Claude conversation.</p><p>You&#8217;re going to run a competitive research project. The goal is a research brief on four competitors, all finished at the same time rather than one after another.</p><p>In each tab, paste this prompt &#8212; swapping the competitor name:</p><pre><code><code>You are a competitive intelligence analyst. Research [Competitor Name] and give me:

(1) Their core product offering &#8212; what they do and for whom.
(2) Their primary positioning &#8212; how they describe themselves, what problem they claim to solve.
(3) Their pricing model &#8212; how they charge (if publicly available).
(4) Their most obvious strengths &#8212; what they do well based on reviews, case studies, or public evidence.
(5) Their most obvious weaknesses &#8212; where customers complain, where they're thin, where they overstate.
(6) One thing about this competitor that most people in my industry underestimate.

Be specific. Cite what you're observing from publicly available information. Don't summarize &#8212; give me the analysis.

My company does [brief description of what you do]. Frame the competitor assessment relative to us.</code></code></pre><p>Run all four tabs simultaneously. You&#8217;re not watching them in sequence &#8212; you&#8217;ve started them all and you&#8217;re doing something else while they run.</p><p>When they finish, open a fifth tab. Paste all four outputs and run this:</p><pre><code><code>I have four competitor research briefs. Synthesize them into one competitive landscape summary:

(1) Where are these competitors positioned relative to each other &#8212; what's the map?
(2) Where is there an uncontested or underserved space in this landscape?
(3) Which competitor should we worry about most and why?
(4) Given my company's positioning [describe it], where do we have the clearest angle of attack?</code></code></pre><p>That fifth conversation is the orchestrator. The four research conversations were the workers. The synthesis is the output you actually use.</p><p>Total time: fifteen to twenty minutes. The same work done sequentially would be sixty to ninety minutes with the usual momentum loss between sessions.</p><div><hr></div><h2>Step 3: The First Result</h2><p>Here&#8217;s what changes when you run this for the first time.</p><p>The parallel output isn&#8217;t just faster &#8212; it&#8217;s structurally different from sequential output. When you research competitors one at a time over an hour, your framing shifts as you go. By the time you get to competitor four, you&#8217;re unconsciously filtering what you notice based on what you already found in the first three. The analysis is path-dependent in a way that introduces bias you don&#8217;t notice.</p><p>Four simultaneous conversations run with the same prompt and the same framing. The starting point is identical. The synthesis conversation gets genuinely comparable inputs &#8212; four analyses built from the same brief, not from four sessions of incrementally shifting context.</p><p>This is what makes the orchestrator conversation useful rather than just convenient. When the fifth conversation synthesizes four consistent briefs, the gaps it surfaces are real gaps &#8212; not artifacts of the order you happened to research things in.</p><p>One pattern I&#8217;ve seen consistently: the synthesis step surfaces a competitive insight that none of the individual briefs flagged, because the insight only exists in the comparison. Competitor A and Competitor C are both going after the same adjacent segment. That&#8217;s not visible in either brief alone. It shows up immediately when someone reads all four at once.</p><p>Run this pattern once on a real project. 
The structural advantage becomes obvious in the first use.</p><div><hr></div>
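<p>If you&#8217;d rather script the pattern than juggle tabs, the fan-out/fan-in shape is easy to express in code. This Python sketch uses a thread pool to run the workers in parallel; <code>run_brief</code> is a placeholder for however you invoke Claude (API call, scripted session), and the competitor names are hypothetical.</p>

```python
from concurrent.futures import ThreadPoolExecutor

def run_brief(competitor: str) -> str:
    # Placeholder worker: in practice this would send the competitive-intelligence
    # prompt from Step 2 to Claude and return the finished brief.
    return f"[brief for {competitor}]"

def competitive_landscape(competitors: list[str]) -> dict[str, str]:
    # Fan out: every worker starts from the same prompt at the same time,
    # so the briefs share identical framing (no path-dependent drift).
    with ThreadPoolExecutor(max_workers=len(competitors)) as pool:
        return dict(zip(competitors, pool.map(run_brief, competitors)))

briefs = competitive_landscape(["Acme", "Globex", "Initech", "Umbrella"])
# Fan in: a fifth "orchestrator" call would then synthesize `briefs`
# into the single landscape summary you actually use.
```

<p>The structure mirrors the tab workflow exactly: identical worker prompts running simultaneously, one synthesis step over comparable inputs.</p>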
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-agents-multi-agent-workflows">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Skills: Custom Commands That Do More]]></title><description><![CDATA[Most people using Claude Code are doing it wrong.]]></description><link>https://www.gtmaipodcast.com/p/claude-skills-custom-commands-that</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-skills-custom-commands-that</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 21:01:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/43d950cb-cdbb-4071-86f1-c137a0717cb2_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people using Claude Code are doing it wrong.</p><p>Not wrong as in incorrect &#8212; wrong as in slow. They&#8217;re typing the same complex prompts over and over, hoping they remember the exact framing that worked last time, and losing 10&#8211;15 minutes per workflow because there&#8217;s no repeatability built into how they work.</p><p>Skills fix that. They&#8217;re the feature most Claude Code users don&#8217;t know exists &#8212; and once you build a few, you won&#8217;t work without them.</p><p>This guide walks you through exactly what Skills are, how to use the ones that ship with Claude Code, and how to build your own custom workflows triggered by a single slash command.</p><div><hr></div><h2>Step 1: What Skills Actually Are</h2><p>A Skill is a markdown file that defines a reusable workflow. Store it in <code>.claude/skills/</code> inside your project (or globally), and it becomes a slash command you can invoke from anywhere in Claude Code.</p><p>Type <code>/commit</code> and Claude doesn&#8217;t just write a commit message &#8212; it follows a specific workflow: checks staged changes, reads recent commit history to match your style, drafts a message, and creates the commit. That entire workflow is defined in a Skill file.</p><p>Here&#8217;s what a Skill file looks like:</p><pre><code><code>---
name: review
description: Check content against house style guide and flag issues
---

You are reviewing content against the following style guide rules:
[your style guide here]

When invoked, ask for the content to review if not already provided.
Then check against each rule and return a structured report:
what passes, what flags, and specific suggested edits for each flag.</code></code></pre><p>That&#8217;s it. A YAML header with a name and description, followed by whatever instructions you want Claude to execute. When you type <code>/review</code> in Claude Code, Claude loads those instructions and runs the workflow.</p><p>The <code>.claude/skills/</code> directory is just a folder. Files in it become commands. The command name is the filename without the <code>.md</code> extension &#8212; so <code>review.md</code> becomes <code>/review</code>.</p><p>Skills can include:</p><ul><li><p>Multi-step workflow instructions</p></li><li><p>Specific output formats Claude should produce</p></li><li><p>Domain knowledge and context Claude should carry into the task</p></li><li><p>Tool use patterns (read these files, run these commands, check this)</p></li><li><p>Conditional logic &#8212; &#8220;if the content type is X, apply format Y&#8221;</p></li><li><p>Parameters &#8212; <code>/competitive-brief Salesforce</code> passes &#8220;Salesforce&#8221; as an argument to the skill</p></li></ul><p>They&#8217;re not macros. They&#8217;re not shortcuts. They&#8217;re saved playbooks &#8212; the same level of precision you&#8217;d give a new hire walking through a task for the first time, available on demand, every time.</p><div><hr></div><h2>Step 2: Your First Skill (Built-In)</h2><p>Claude Code ships with several built-in Skills. The most useful one to understand first is <code>/commit</code> &#8212; not because it&#8217;s the most impressive, but because it demonstrates exactly what makes Skills valuable.</p><p>Before Skills: you&#8217;d manually stage your files, think about what changed, write a commit message that may or may not match your project&#8217;s conventions, and hope it was good enough.</p><p>With <code>/commit</code>: you type one command. Claude checks what&#8217;s staged, reads your recent commit history to match the style, drafts a message, and executes. 
The whole thing takes about 10 seconds instead of 2 minutes.</p><p>Try it now. Make a change to any file in a project, stage it with <code>git add</code>, then type <code>/commit</code> in Claude Code.</p><p>Watch what happens. Claude doesn&#8217;t just string words together &#8212; it follows the workflow defined in its Skills file. It checks <code>git status</code>, runs <code>git diff</code>, reads <code>git log</code> to see how you&#8217;ve written messages before, then drafts something consistent with your history.</p><p>That&#8217;s a Skill doing its job.</p><div><hr></div><h2>Step 3: Understanding the Mental Model</h2><p>Here&#8217;s the shift that makes Skills click: stop thinking about prompts and start thinking about workflows.</p><p>Every repetitive task you do in Claude Code has a shape. A content review has the same steps every time: load the content, check against the rules, flag the issues, suggest edits. A competitive brief has the same structure every time: company overview, product positioning, pricing, strengths, weaknesses, how to beat them in a deal. A weekly summary pulls from the same sources every time: commit history, notes, decisions made.</p><p>The problem isn&#8217;t that Claude can&#8217;t do these things. The problem is that every time you describe the workflow from scratch, you&#8217;re introducing variability. You&#8217;ll forget a step. You&#8217;ll frame it slightly differently. You&#8217;ll get a slightly different result.</p><p>A Skill locks the workflow down. The instructions are the same every time. The output format is the same every time. The quality floor doesn&#8217;t move.</p><p>Think of it this way: you don&#8217;t explain to a good employee how to do the same task from scratch each time they do it. You document the process once, they follow it reliably. Skills are that documentation &#8212; except Claude executes it automatically every time you call it.</p><p>Built-in Skills give you the pattern. 
Custom Skills give you the leverage.</p><div><hr></div>
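<p>To make the pattern concrete, here is a sketch of a custom Skill file: a hypothetical <code>competitive-brief.md</code> saved in <code>.claude/skills/</code>. The sections and wording are illustrative, so adapt them to your own playbook:</p><pre><code>---
name: competitive-brief
description: Build a one-page competitive brief for a named competitor
---

You are building a competitive brief. The competitor name is passed as an
argument, e.g. /competitive-brief Salesforce.

Produce a brief with these sections, in this order:
1. Company overview (two sentences)
2. Product positioning
3. Pricing (write "unknown" rather than guessing)
4. Strengths and weaknesses
5. How we beat them in a deal

If you lack current information on the competitor, say so explicitly
instead of filling gaps with plausible-sounding detail.</code></pre><p>Save the file, and <code>/competitive-brief Salesforce</code> runs that entire playbook with &#8220;Salesforce&#8221; passed in as the argument.</p>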
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-skills-custom-commands-that">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude and MCP: Connecting Claude to Your Tools]]></title><description><![CDATA[Most people are using Claude wrong.]]></description><link>https://www.gtmaipodcast.com/p/claude-and-mcp-connecting-claude</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-and-mcp-connecting-claude</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 21:00:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1b709203-8627-4a42-bce9-ab575e5115cd_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people are using Claude wrong.</p><p>Not because they&#8217;re writing bad prompts. Because they&#8217;re spending half the conversation copying and pasting content that already lives somewhere &#8212; a Google Doc, a Notion page, a GitHub repo &#8212; just so Claude can read it.</p><p>MCP fixes that. And almost nobody outside of developer circles knows it exists.</p><p>Here&#8217;s what changes once you understand it: Claude stops being a tool you feed information and starts being a tool that goes and gets it.</p><div><hr></div><h2>Step 1: What MCP Actually Is (and Why It&#8217;s Not a Developer Thing)</h2><p>MCP stands for Model Context Protocol. Anthropic released it as an open standard so that Claude &#8212; and any other AI &#8212; can connect to external tools and data sources in a consistent, predictable way.</p><p>Without MCP: Claude only knows what you paste into the chat window. Period.</p><p>With MCP: Claude can read a file from your Google Drive, pull up a Notion page, check the status of a GitHub issue, fetch a webpage &#8212; all without you touching a single thing. You just ask.</p><p>The connectors that make this work are called <strong>MCP servers</strong>. Each one handles a specific tool. Google Drive has an MCP server. Notion has one. GitHub has one. Slack has one. 
There are hundreds of them built already, and more appear weekly.</p><p>The part nobody tells business professionals: Claude.ai has several MCP integrations built directly into the interface. They call them <strong>Connectors</strong>. Zero technical setup. You connect your account, and Claude can access your data.</p><p>That&#8217;s where we&#8217;re starting &#8212; no command line, no config files, no developer required.</p><p><strong>What you need before Step 2:</strong></p><ul><li><p>A Claude.ai account (Pro or Team tier &#8212; Connectors are not available on the free plan)</p></li><li><p>Access to at least one of: Google Drive, Notion, GitHub</p></li></ul><div><hr></div><h2>Step 2: Connect Your First Tool (Google Drive or Notion)</h2><p>Go to Claude.ai. In the left sidebar, look for <strong>Integrations</strong> or the connector settings &#8212; the exact label has shifted in recent UI updates, but it&#8217;s in the main navigation. As of early 2026, it lives under your account settings or the project settings panel.</p><p><strong>If you&#8217;re using Google Drive:</strong></p><ol><li><p>Click the Google Drive connector</p></li><li><p>Authenticate with your Google account &#8212; standard OAuth flow, same as connecting any app</p></li><li><p>Grant the requested permissions (read access to Drive files)</p></li><li><p>That&#8217;s it</p></li></ol><p><strong>If you&#8217;re using Notion:</strong></p><ol><li><p>Click the Notion connector</p></li><li><p>Authenticate with your Notion workspace</p></li><li><p>Select which pages or databases Claude can access &#8212; I&#8217;d recommend starting broad and narrowing later if privacy is a concern</p></li><li><p>Done</p></li></ol><p>Once connected, Claude can see those files. You don&#8217;t have to paste anything. You just reference them in your prompt.</p><p>One important thing to understand: Claude doesn&#8217;t browse your entire Drive proactively. It reads what you point it to. 
The connection enables access &#8212; you still direct it.</p><div><hr></div><h2>Step 3: Get Claude Reading One of Your Files</h2><p>This is where it clicks.</p><p>Open a new conversation in Claude.ai. Make sure you&#8217;re in a project that has your connector enabled (if you set it up at the project level) or that you&#8217;ve enabled it at the account level.</p><p>Try one of these prompts exactly as written &#8212; just substitute your actual file or page name:</p><p><strong>Google Drive:</strong></p><blockquote><p>&#8220;Read my file called [exact file name] in Google Drive and give me a one-paragraph summary of the main argument.&#8221;</p></blockquote><p><strong>Notion:</strong></p><blockquote><p>&#8220;Pull up my Notion page called [page title] and tell me the three most important decisions documented there.&#8221;</p></blockquote><p>Watch what happens. Claude fetches the file, reads it, and responds &#8212; without you touching a single thing.</p><p>If you get an error, the most common fix is re-authenticating the connector. Occasionally the OAuth token expires and needs a refresh. Go back to connector settings, disconnect, reconnect.</p><p>Once that works, you&#8217;ve crossed the threshold. That&#8217;s MCP. That&#8217;s what the rest of this guide builds on.</p><div><hr></div>
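<p>Connectors are the no-setup path. If you&#8217;re comfortable editing one config file, the same protocol also powers local MCP servers in the Claude desktop app. Here is a minimal sketch of a <code>claude_desktop_config.json</code> entry, assuming the reference filesystem server and a placeholder folder path &#8211; server names and packages change, so check the current MCP documentation before copying:</p><pre><code>{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents/briefs"
      ]
    }
  }
}</code></pre><p>Restart the desktop app after saving, and Claude can read files in that folder the same way the hosted Connectors read Drive or Notion.</p>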
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-and-mcp-connecting-claude">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Projects]]></title><description><![CDATA[Most people are using Claude the same way they use a search engine &#8212; new tab, new question, repeat.]]></description><link>https://www.gtmaipodcast.com/p/claude-projects</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-projects</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:56:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4683a0b3-8516-4fa2-ac27-8ce717afc923_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people are using Claude the same way they use a search engine &#8212; new tab, new question, repeat. They retype who they are and what they&#8217;re working on at the start of every conversation. They re-paste the brand guidelines every time they need copy reviewed. They re-explain their ICP every time they ask for help with a sales email. They&#8217;re treating a 200,000-token context window like a sticky note.</p><p>Projects fix this. One setup, persistent context, every conversation in that project starts informed. It&#8217;s the difference between having an assistant who knows your business and having a stranger who needs a full briefing every single time.</p><p>This guide is not about what to use Claude for &#8212; there are separate guides for that. 
This is about how to set up the infrastructure that makes Claude consistently useful instead of occasionally useful.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ul><li><p>A working Project with custom instructions that shape every conversation without you having to set the stage</p></li><li><p>Uploaded files Claude can pull from every time &#8212; your brand guide, product docs, ICP definition, competitor intel</p></li><li><p>An organized conversation structure within the project so different workstreams don&#8217;t bleed into each other</p></li><li><p>By the end of this guide, four concrete Project setups for the highest-value use cases</p></li></ul><div><hr></div><h2>Step 1: What Claude Projects Are (and What They&#8217;re Not)</h2><p>Every new Claude conversation normally starts from zero. The model has no memory of what you discussed yesterday, last week, or five minutes ago in a different tab. That&#8217;s not a bug &#8212; it&#8217;s how the model works. But it creates friction when you&#8217;re doing recurring work and the same context needs to be present every time.</p><p>Projects solve this problem with three components:</p><p><strong>Project Instructions</strong> &#8212; a custom system prompt that runs in every conversation inside the project. This is where you define who you are, what you&#8217;re working on, how you want Claude to behave, and what constraints apply. Every conversation in the project inherits this context automatically.</p><p><strong>Uploaded files</strong> &#8212; documents, PDFs, and text files that Claude can reference in any conversation in the project. Brand guidelines, product documentation, competitor battlecards, pricing sheets, style guides, ICP definitions &#8212; upload them once and they&#8217;re always there.</p><p><strong>Conversation history</strong> &#8212; all conversations within the project are stored together. 
Claude cannot automatically read a previous conversation while you&#8217;re in a new one, but the Project Instructions and uploaded files create an informed baseline that every conversation starts from.</p><p>Projects are available on Claude.ai Pro ($20/month) and Team plans. The free tier does not include Projects.</p><p>What Projects are not: they&#8217;re not magic memory that carries conversation context from one chat to the next automatically. They&#8217;re a persistent context layer &#8212; instructions and reference materials always present, not a conversation transcript Claude reads before responding.</p><p>The use case is simpler than the feature sounds. Instead of starting every conversation with &#8220;I&#8217;m a VP of Marketing at a B2B SaaS company focused on mid-market healthcare, here&#8217;s our ICP, here&#8217;s our messaging framework, here&#8217;s our style guide&#8221; &#8212; you write that once in Project Instructions and never type it again.</p><div><hr></div><h2>Step 2: Setting Up Your First Project</h2><p>Go to claude.ai. In the left sidebar, you&#8217;ll see a &#8220;Projects&#8221; section with a &#8220;Create project&#8221; option. Click it, give it a name, and you&#8217;re in.</p><p>The two things to configure immediately:</p><p><strong>Project Instructions.</strong> This is the highest-leverage thing you&#8217;ll do in this entire guide. Click &#8220;Set project instructions&#8221; (or &#8220;Edit project instructions&#8221; if you&#8217;ve been here before) and write your context. Be specific. A vague instruction produces vague improvement &#8212; or none at all.</p><p>Most people write something like: &#8220;You are a helpful assistant for my marketing work.&#8221; That&#8217;s not an instruction. That&#8217;s ambient noise.</p><p>A useful instruction looks like this:</p><pre><code><code>You are a writing and strategy assistant for [Name], VP of Marketing at [Company].
[Company] is a B2B SaaS platform for mid-market healthcare operations. Our ICP is:
Operations directors and CMOs at independent physician groups (20-200 physicians),
$5M-$50M revenue, typically running Epic or athenahealth.

Core product: [product name]. Key outcomes we sell: [outcome 1], [outcome 2], [outcome 3].
Primary competitor: [Competitor]. Our main differentiators vs. them: [differentiator 1], [differentiator 2].

When helping with written content, match our brand voice: direct, clinical without being cold,
evidence-based. We do not use passive voice, we do not use jargon like "leverage" or "synergies,"
and we do not make claims we can't support with data.

When helping with strategy, push back on assumptions. Ask for the data before accepting a premise.</code></code></pre><p>That&#8217;s an instruction. It shapes every single conversation in the project without you having to re-establish context.</p><p><strong>Uploaded files.</strong> Once instructions are set, upload your reference documents. Click the paperclip or file icon inside the project to add files. Start with the documents you re-paste most often.</p><div><hr></div><h2>Step 3: Your First Working Result</h2><p>With instructions set and at least one file uploaded, open a new conversation inside the project. Notice that you don&#8217;t need to introduce yourself or explain what you&#8217;re working on. Claude has the context from your instructions.</p><p>Ask it something you&#8217;d normally spend two minutes front-loading with context. &#8220;Review this email against our brand voice guidelines&#8221; &#8212; paste the email, nothing else. Claude knows your voice guidelines from the instructions. &#8220;Does this positioning statement align with how we differentiate from [Competitor]?&#8221; &#8212; paste the statement. Claude has the competitor context.</p><p>This is the shift. The context is infrastructure now, not a prompt tax you pay every conversation.</p><p>If something comes back off, it means your instructions need to be more specific. Edit them. Add a constraint. Add an example of what you want. Instructions are a living document &#8212; the best Project Instructions are ones that have been refined through five conversations, not written once and never touched.</p><p>The rule: if you find yourself correcting Claude for the same thing in multiple conversations, that correction belongs in your Project Instructions.</p><div><hr></div>
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-projects">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Cowork for HR]]></title><description><![CDATA[HR operates on cycles. Hiring cycles. Onboarding cycles. Performance review cycles. Engagement survey cycles. Headcount planning cycles.]]></description><link>https://www.gtmaipodcast.com/p/claude-cowork-for-hr</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-cowork-for-hr</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:50:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f00b7e27-46b2-4fcb-a171-c75646d60043_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>HR operates on cycles. Hiring cycles. Onboarding cycles. Performance review cycles. Engagement survey cycles. Headcount planning cycles. Every cycle starts the same way &#8212; with the same pipeline review, the same pre-calibration prep, the same survey export you&#8217;re staring at trying to find the story inside it.</p><p>The work is real. The problem is that it&#8217;s reactive. Something slips through onboarding because no one checked. A performance calibration runs long because no one did the distribution analysis ahead of time. A pulse score drops and leadership finds out in the all-hands instead of in the HR report.</p><p>Cowork threads know your cycles. You seed them once &#8212; with your hiring context, your review rubric, your onboarding checklist &#8212; and every time that cycle comes back around, the thread already has everything it needs to get you to the analysis fast. The work still happens. 
Cowork makes sure it happens systematically, not reactively.</p><p>This guide sets up five persistent threads for the recurring rhythms HR owns: recruiting pipelines, onboarding readiness, performance review prep, employee pulse, and headcount planning.</p><div><hr></div><h2>What You&#8217;ll Build</h2><p>Five persistent threads, each with a job:</p><ol><li><p><strong>Recruiting Pipeline Monitor</strong> &#8212; weekly view of which roles are at risk and what sourcing needs to happen</p></li><li><p><strong>Onboarding Readiness Tracker</strong> &#8212; weekly flag of what&#8217;s not ready before Day 1</p></li><li><p><strong>Performance Review Prep Thread</strong> &#8212; calibration-ready summaries with distribution analysis and equity flags</p></li><li><p><strong>Employee Pulse Monitor</strong> &#8212; monthly survey analysis that surfaces what leadership needs to act on</p></li><li><p><strong>Headcount Planning Thread</strong> &#8212; quarterly scenario modeling against budget and attrition data</p></li></ol><div><hr></div><h2>Step 1: Setup</h2><p>Claude Cowork runs inside Claude.ai. On desktop, find Projects in the left sidebar &#8212; that&#8217;s the persistent context layer where threads live. On mobile (iOS and Android), same app, same threads.</p><p>Name your threads clearly. You&#8217;ll be checking some of these from your phone between meetings:</p><ul><li><p><code>HR: Recruiting Pipeline</code></p></li><li><p><code>HR: Onboarding Readiness</code></p></li><li><p><code>HR: Performance Review</code></p></li><li><p><code>HR: Employee Pulse</code></p></li><li><p><code>HR: Headcount Planning</code></p></li></ul><p>No integrations required. No API keys. No IT ticket. 
The threads work from copy-paste &#8212; whatever you can pull from your ATS, HRIS, or survey tool, you drop in.</p><div><hr></div><h2>Step 2: Recruiting Pipeline Monitor</h2><p>The weekly recruiting sync happens because someone needs to ask the questions that should already be answered: which roles are behind, where the pipeline is thin, what sourcing isn&#8217;t working. It&#8217;s a reactive meeting. It doesn&#8217;t have to be.</p><p>Open your <code>HR: Recruiting Pipeline</code> thread and paste this to seed it:</p><pre><code><code>You are my recruiting pipeline analyst. I'm going to give you context on our open roles,
hiring manager priorities, target start dates, and sourcing channels. Hold this context
across all future updates I give you.

Open roles: [paste your current open req list with department, level, hiring manager,
target start date, sourcing channels active]

For each role, track: stage distribution (applied / screened / interview / offer),
days open, and any sourcing context I give you.</code></code></pre><p>Then each week, paste your pipeline update and use this prompt:</p><pre><code><code>Here's the recruiting pipeline update: [paste &#8212; role, stage, count, status]

Produce:
1. Which roles are at risk of missing the target start date and why
2. Where the pipeline is thinning &#8212; stages with fewer candidates than needed
3. Recommended sourcing actions for the 2 most critical open roles
4. What I should communicate to hiring managers today</code></code></pre><p>Recruiting updates that used to require a 30-minute weekly sync get to the point in 5 minutes. The thread remembers your roles, your hiring managers, your timelines. You&#8217;re not re-explaining context every week &#8212; you&#8217;re just dropping in new data.</p><div><hr></div><h2>Step 3: Your First Result</h2><p>Before you read further, run the pipeline thread once with a real update.</p><p>Paste your current open roles and one week of pipeline data. Review what comes back. You&#8217;re looking for the thread to flag the right risks &#8212; roles that are old and thin at the top of funnel, or roles where the interview stage has stalled.</p><p>The output won&#8217;t be perfect on the first pass. Add context where it&#8217;s missing: tell the thread which roles have the most pressure, which hiring managers need proactive communication, whether you&#8217;re capacity-constrained on the sourcing side. The thread holds all of it.</p><p>On mobile, the thread is right where you left it. Before your Monday morning recruiting review, open the app, pull up the thread, check the flags. You walk into the meeting knowing what matters.</p><p>That&#8217;s the first win. The rest is below the line.</p><div><hr></div>
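<p>If you&#8217;re unsure what the weekly paste should look like, here is a minimal example update. Every role, name, and number below is made up; the structure is what matters:</p><pre><code>Senior AE (Sales) | HM: Dana | open 34 days | target start: Jun 1
Applied 18 / Screened 6 / Interview 2 / Offer 0
Note: referral push ended last week, top of funnel slowing

Data Analyst (Ops) | HM: Marcus | open 12 days | target start: Jul 15
Applied 41 / Screened 9 / Interview 3 / Offer 1
Note: offer out, awaiting response</code></pre><p>Plain text is enough. Because you seeded the thread with your open roles, it maps each block back to the right req without any extra formatting.</p>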
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-cowork-for-hr">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Cowork for Legal]]></title><description><![CDATA[Legal teams are reactive by design. A contract request arrives. A regulatory deadline surfaces. A matter escalates.]]></description><link>https://www.gtmaipodcast.com/p/claude-cowork-for-legal</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-cowork-for-legal</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:49:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/53239bd7-934a-4954-b74a-6911590e56aa_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Legal teams are reactive by design. A contract request arrives. A regulatory deadline surfaces. A matter escalates. The work happens in response to events &#8212; and then the team scrambles to reconstruct what happened, what&#8217;s pending, and what&#8217;s about to be late. Every week, some version of the same question gets asked: &#8220;Where are we on that contract?&#8221; or &#8220;When does that regulation take effect?&#8221;</p><p>Claude Cowork adds a proactive layer. Persistent agent threads that maintain context across sessions &#8212; running on Desktop, iOS, and Android &#8212; mean the contract queue is visible before stakeholders complain, the renewal window is caught before it closes, and the regulatory update is briefed before leadership asks. The legal team that knows what&#8217;s coming has more leverage than the one that finds out when it lands.</p><p>One important framing before we get into the setup: Cowork assists with legal workflows &#8212; drafting, organizing, summarizing, tracking &#8212; but does not provide legal advice. Every output from these threads should be reviewed by qualified legal counsel before use. 
These are operational tools for managing the volume and visibility of legal work, not substitutes for legal judgment.</p><div><hr></div><h2>What You&#8217;ll Build</h2><p>By the end of this guide you&#8217;ll have two persistent Cowork threads running without you touching them again: a weekly contract queue monitor and a monthly regulatory update tracker. You&#8217;ll also have the exact prompt templates to build three more threads &#8212; matter status digest, contract renewal monitor, and policy review queue &#8212; copy-paste ready, not illustrative sketches.</p><div><hr></div><h2>Step 1: Get Set Up</h2><p><strong>What plan you need:</strong> Claude Cowork requires Claude Teams or Claude Max. For a legal team with 2&#8211;6 attorneys and paralegals, Claude Teams at $30/user/month is the right tier. Solo in-house counsel can run it on Max.</p><p><strong>Accessing Cowork:</strong> On Desktop (Mac/Windows), open Claude and look for &#8220;Cowork&#8221; in the left sidebar &#8212; it&#8217;s a dedicated section below your regular conversations. On iOS and Android, it&#8217;s under the same sidebar menu. The interface is a thread list, not a single chat. Each thread has a name, a schedule, and its own persistent context.</p><p><strong>The one setup step that changes everything:</strong> Go to Settings &#8594; Notifications &#8594; Cowork Delivery. Turn on &#8220;Notify me when a scheduled thread completes.&#8221; You want a push notification when the contract queue lands on Monday morning, not to remember to go look for it. The difference between a tool you use and a tool that works for you is whether it interrupts you with results or waits for you to remember to check.</p><div><hr></div><h2>Step 2: Create Your First Thread &#8212; Contract Queue Monitor</h2><p>This is the one that solves the most immediate pain. Contract requests pile up. SLAs slip. Business stakeholders send follow-up emails that start politely and get less polite. 
The queue monitor makes the backlog visible before it becomes a relationship problem.</p><p>Click &#8220;New Cowork Thread&#8221; in the sidebar. Name it &#8220;Contract Queue Monitor.&#8221; Schedule it weekly &#8212; Mondays at 7 AM is the right call. You want this before the week starts, not after your inbox has already filled with questions about it.</p><p><strong>Seed the thread with four things before the first run:</strong></p><ul><li><p>The contract types you handle (MSA, NDA, SOW, vendor agreement, etc.)</p></li><li><p>The standard SLA for each type (NDA: 3 business days; MSA: 10 business days, etc.)</p></li><li><p>Your escalation criteria (deal size threshold, counterparty risk tier, regulatory exposure, etc.)</p></li><li><p>Key stakeholders waiting on contracts (Sales, Procurement, Finance &#8212; whoever you need to communicate status back to)</p></li></ul><p>Here&#8217;s the full prompt:</p><pre><code><code>You are my weekly contract queue manager. I'll paste the current contract queue into this thread each Monday. Use the following context to analyze it:

CONTRACT TYPES AND SLAs:
[Paste your contract types and turnaround standards here]

ESCALATION CRITERIA:
[Paste your escalation triggers here &#8212; e.g., contracts over $500K, contracts touching regulated data, deals in final-stage pipeline]

KEY STAKEHOLDERS:
[Name the teams/roles waiting on contracts and what they need to know]

Each Monday when I paste the queue (format: contract name, type, requestor, date received, current status), produce four outputs:

1. CONTRACTS PAST SLA &#8212; List every contract past its turnaround standard. For each: how far past SLA it is, and what the downstream business impact is (deal at risk, vendor relationship, compliance deadline).

2. ESCALATION FLAGS &#8212; Any contracts that meet the escalation criteria we defined. Why they qualify and what action is needed.

3. STAKEHOLDER COMMUNICATIONS &#8212; Draft the status update to send to business stakeholders waiting on contracts. Specific by team: what's done, what's in review, what they should expect and when.

4. ONE-THING CALL &#8212; The single action that would most reduce queue backlog this week. Not a list of things. The one thing.

All outputs should be reviewed by qualified legal counsel before use or communication.</code></code></pre><p>After pasting the prompt, hit &#8220;Schedule.&#8221; Then reply to the thread with your SLA table and escalation criteria. Claude will hold that context for every future run &#8212; you don&#8217;t re-explain it every week.</p><div><hr></div><h2>Step 3: Your First Result</h2><p>The first queue run will land Monday morning. You paste the queue list (contract name, type, requestor, date received, status &#8212; takes 5 minutes to pull from your tracker), and Claude returns four outputs: the past-SLA report, escalation flags, stakeholder communications, and the one-thing call.</p><p>After the first run, do one thing: reply to the thread with a sentence or two about your team&#8217;s current capacity constraints. Something like: &#8220;We&#8217;re currently a team of two with one attorney on leave through month-end.&#8221; The prioritization gets sharper when Claude understands what&#8217;s actually available.</p><p>From your phone mid-week, you can reply to the thread with a follow-up: &#8220;What&#8217;s the downstream impact if the Acme MSA slips another week?&#8221; Claude knows the context &#8212; the contract, the SLA, the stakeholder &#8212; and gives you a specific answer rather than a generic one.</p><div><hr></div>
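<p>If it helps to see the paste format, here is a hypothetical Monday queue. The contracts, names, and dates are invented; the five fields match what the prompt expects:</p><pre><code>Acme Corp MSA | MSA | requestor: Sales (Priya) | received: May 2 | status: redline round 2
Northwind NDA | NDA | requestor: Procurement | received: May 12 | status: not started
Gamma SOW | SOW | requestor: Services | received: Apr 28 | status: awaiting counterparty signature</code></pre><p>Claude checks each row against the SLA table and escalation criteria you seeded, so a five-minute paste is all the structure the weekly run needs.</p>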
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-cowork-for-legal">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Cowork for Data Teams]]></title><description><![CDATA[Data teams are the most requested, most under-resourced, most reactive function in most companies. The ad hoc request queue never empties.]]></description><link>https://www.gtmaipodcast.com/p/claude-cowork-for-data-teams</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-cowork-for-data-teams</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:48:03 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/986b1ca4-a19a-466a-a3d3-cbefedf06f53_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Data teams are the most requested, most under-resourced, most reactive function in most companies. The ad hoc request queue never empties. The data quality issue is always someone else&#8217;s problem until it becomes everyone&#8217;s problem &#8212; usually on a Thursday, usually when a VP is about to walk into a board meeting with a number that&#8217;s wrong. The insight that leadership needs is buried in a dashboard they&#8217;ve never opened.</p><p>And through all of it, the data team is expected to do analysis. Strategic work. The work that actually moves the business. Instead, they&#8217;re triaging Slack messages at 9 PM because someone noticed a metric changed and nobody knows why.</p><p>Claude Cowork is persistent agent threads in Claude Desktop, iOS, and Android. You build a thread once &#8212; teach it your data sources, your quality thresholds, your metric definitions, your stakeholder context &#8212; and it runs. Every week, you paste in the data. What comes back is a structured brief that replaces 90 minutes of manual work with 10 minutes of review. The analysis doesn&#8217;t live in your head anymore. It lives in the thread. 
And the thread remembers.</p><div><hr></div><h2>What You&#8217;ll Build</h2><p>Five persistent threads that cover the data team operating cadence:</p><ol><li><p><strong>Data Quality Monitor</strong> &#8212; Weekly thread that catches quality issues before they become dashboard disasters</p></li><li><p><strong>Metric Anomaly and Insight Digest</strong> &#8212; Weekly brief that turns raw numbers into the insight leadership actually needs</p></li><li><p><strong>Reporting Cadence Manager</strong> &#8212; Weekly thread tracking every report, every SLA, and every delivery risk</p></li><li><p><strong>Stakeholder Request Queue Manager</strong> &#8212; Weekly prioritization of the ad hoc request queue &#8212; by logic, not FIFO</p></li><li><p><strong>Data Strategy and Roadmap Thread</strong> &#8212; Monthly thread that turns recurring pain points into a roadmap your leadership will approve</p></li></ol><div><hr></div><h2>Step 1: Setup</h2><p><strong>Time required:</strong> 20 minutes. No developer. No API.</p><p><strong>Access:</strong> Claude.ai Pro or Team plan. Desktop app is recommended for data work &#8212; you&#8217;ll be pasting exports and reports regularly, and the desktop clipboard handling is faster than mobile.</p><p><strong>The one thing to do before you build threads:</strong> Document your context once in a master setup note. Your data sources. Your key metrics and their definitions. Your stakeholders and what they care about. Your team capacity. 
You&#8217;ll paste pieces of this into each thread seed &#8212; having it in one place means setup takes 20 minutes instead of an hour.</p><p><strong>Thread naming convention:</strong></p><ul><li><p><code>Data Quality Monitor &#8212; [Team/Quarter]</code></p></li><li><p><code>Metric Digest &#8212; Weekly</code></p></li><li><p><code>Reporting Cadence &#8212; [Quarter]</code></p></li><li><p><code>Request Queue &#8212; [Month]</code></p></li><li><p><code>Data Strategy &#8212; [Quarter]</code></p></li></ul><div><hr></div><h2>Step 2: Data Quality Monitor Thread</h2><p>This is the thread that catches what your dashboards miss &#8212; before your stakeholders find it first.</p><p>Create a thread named <code>Data Quality Monitor</code>. No schedule needed; you&#8217;ll trigger this manually each week when you run your quality checks.</p><p><strong>Seed prompt &#8212; paste this first:</strong></p><pre><code><code>DATA QUALITY MONITOR SETUP

You are my weekly data quality analyst. I'll paste a data quality report each week.
Your job is to flag issues before they reach stakeholders or break downstream reports.

My key data sources and their SLAs:
- [Source 1 name]: refreshes [daily/hourly], SLA is [X hours after expected refresh], downstream: [reports/decisions it feeds]
- [Source 2 name]: refreshes [weekly], SLA is [X], downstream: [...]
- [Source 3 name]: refreshes [real-time], SLA is [X], downstream: [...]

Quality dimensions I track:
- Completeness: [key fields that must not be null &#8212; e.g., "customer_id, transaction_date, revenue"]
- Accuracy: [fields with known ranges or validation rules &#8212; e.g., "revenue should not exceed $500K for a single transaction"]
- Freshness: [last-refresh timestamp per source, acceptable lag]
- Consistency: [cross-source checks &#8212; e.g., "order count in Shopify should match order count in warehouse"]

My quality thresholds:
- [Source] completeness: flag if [field] null rate exceeds [X]%
- [Source] freshness: flag if last refresh is older than [X hours]
- [Source] accuracy: flag if [metric] outside [range]</code></code></pre><p><strong>Weekly prompt &#8212; paste this each week with your report:</strong></p><pre><code><code>Here's this week's data quality report: [paste &#8212; source, metric, current value, threshold]

Flag:
1. Any metric below threshold &#8212; how far off, and which downstream reports or decisions are affected
2. Any data source that's stale &#8212; last refresh timestamp and business impact
3. The one data quality issue most likely to cause a business stakeholder to lose trust in our data
4. What the data engineering team needs to prioritize this week</code></code></pre><p>The thread builds a running record of quality patterns across weeks. By month three, you&#8217;ll know which sources fail on the same day every month, which fields are structurally unreliable, and which downstream reports are most exposed. That&#8217;s the intelligence that turns reactive firefighting into proactive infrastructure work.</p><div><hr></div><h2>Step 3: First Run</h2><p>Paste the seed prompt. Then paste your first quality report &#8212; even if it&#8217;s just a rough export from your monitoring tool or a manual check. Claude will map the structure to your setup.</p><p>After the first run, add two things to your seed prompt: (1) the names of the specific dashboards or reports that are downstream of each source, and (2) the names of the stakeholders who own those dashboards. When a quality issue hits, the output changes from &#8220;this affects the revenue dashboard&#8221; to &#8220;this affects the revenue dashboard that Sarah&#8217;s leadership team reads every Monday.&#8221; That specificity is what makes the brief actually actionable.</p><p>Quality issues caught before the dashboards are wrong. That&#8217;s the job.</p><div><hr></div>
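<p>If you'd rather generate the weekly paste programmatically than assemble it by hand, a minimal pandas sketch of the completeness check looks like this. The column names (<code>customer_id</code>, <code>revenue</code>) and the 5% null threshold are illustrative assumptions, not part of the Cowork setup itself:</p>

```python
import pandas as pd

def quality_report(df: pd.DataFrame, null_threshold: float = 0.05) -> pd.DataFrame:
    """One row per column: null rate, and whether it breaches the threshold."""
    null_rates = df.isna().mean()
    return pd.DataFrame({
        "column": null_rates.index,
        "null_rate": null_rates.round(3).values,
        "breach": (null_rates > null_threshold).values,
    })

# Illustrative data -- in practice this is your weekly export
orders = pd.DataFrame({
    "customer_id": ["a1", None, "c3", "d4"],
    "revenue": [120.0, 80.0, None, 95.0],
})
report = quality_report(orders)
```

<p>Paste the resulting table into the weekly prompt alongside your freshness timestamps; the thread does the interpretation, this just standardizes the input.</p>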
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-cowork-for-data-teams">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code for Data Teams]]></title><description><![CDATA[Data teams are the translators. Between raw data and business decisions. Between what's in the database and what the dashboard shows. Between what a stakeholder asks for and what they actually need.]]></description><link>https://www.gtmaipodcast.com/p/claude-code-for-data-teams</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-code-for-data-teams</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:46:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bcd37fd4-025f-45b2-9dca-62897fc2d0c8_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Data teams are the translators. Between raw data and business decisions. Between what&#8217;s in the database and what the dashboard shows. Between what a stakeholder asks for and what they actually need.</p><p>The problem isn&#8217;t data. You have more data than you can act on. The problem is the gap between the data existing and the data meaning something &#8212; and that gap is filled by a small team of analysts who are already maxed out.</p><p>The backlog looks familiar: the EDA your lead analyst started but hasn&#8217;t finished because three other things landed. The data quality audit that&#8217;s been &#8220;on the roadmap&#8221; for two quarters. The metric definitions that live in nobody&#8217;s head and everywhere&#8217;s Slack. The dashboard spec that gets handed to engineering missing half the decisions, then comes back missing half the insight.</p><p>Claude Code doesn&#8217;t replace your analysts. 
It removes the parts of their day that don&#8217;t require them &#8212; the mechanical first passes, the formatting, the query scaffolding, the document structuring &#8212; so they can spend their time on the judgment calls that actually do.</p><p>The 1M token context window is the thing that makes this different for data teams specifically. You can paste the full schema, the messy CSV, and the stakeholder request simultaneously and get output that&#8217;s grounded in the actual data structure &#8212; not a generic template that assumes a clean warehouse you don&#8217;t have.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ol><li><p>An exploratory data analysis brief (from raw CSV to insight summary in 20 minutes)</p></li><li><p>A data quality audit report with a prioritized remediation plan</p></li><li><p>A metric definition document your data catalog has been waiting for</p></li><li><p>A dashboard specification that actually guides engineers</p></li><li><p>SQL queries with plain-English explanations and performance flags</p></li></ol><div><hr></div><h2>Step 1: What You Need and How to Start</h2><p>Claude Code is available on the <strong>Max</strong> or <strong>Team</strong> plan, inside Claude.ai.</p><p>Before you paste data or give a task, set context first. Data analysts skip this and then wonder why the output reads like it was written for a general audience.</p><pre><code><code>I'm a [Data Analyst / Analytics Engineer / Head of Data] at [Company Name].
We sell [what you sell] to [who you sell to]. Our data stack is [Snowflake/BigQuery/
Redshift + dbt / Looker / etc.]. When I ask you to analyze data or write queries,
prioritize business interpretability over technical precision &#8212; flag tradeoffs when
they exist. I'll give you specific tasks in a moment.</code></code></pre><p>That context statement changes what you get back. Do it every session.</p><div><hr></div><h2>Step 2: Exploratory Data Analysis Brief</h2><p>You&#8217;ve received a new dataset. Maybe it&#8217;s from a product team handoff. Maybe it&#8217;s a vendor export. Maybe it&#8217;s your own data that nobody has formally characterized. The first move is always the same: understand what you have before you analyze it.</p><p>Paste your CSV &#8212; or a representative sample (1,000&#8211;5,000 rows) if it&#8217;s large &#8212; and use this prompt:</p><pre><code><code>Run an exploratory data analysis on this dataset and produce:

1. Dataset overview &#8212; number of rows, columns, data types, and what each column
   appears to represent based on its name and values
2. Data quality issues &#8212; nulls by column (flag anything &gt;5%), potential duplicates,
   and values that look like outliers or errors
3. Key distributions &#8212; for numeric columns: mean, median, range, and any notable
   skew; for categorical columns: top values and their frequencies
4. Correlations &#8212; pairs of variables with notable relationships, positive or negative
5. The 3 most interesting findings in this data &#8212; what a senior data analyst would
   flag for further investigation
6. Recommended next analysis questions based on what you see

Format the output as a structured brief I can share with stakeholders or use to scope
the next phase of analysis.

Here is the dataset:
[paste CSV]</code></code></pre><p>EDA that would take a full day of Pandas and matplotlib &#8212; building the environment, writing the profiling code, formatting the output, writing the summary &#8212; in 20 minutes. The output isn&#8217;t a replacement for deep analysis. It&#8217;s the starting point you&#8217;d spend a day building before you could even get to the interesting questions.</p><p>The 3 most interesting findings prompt is the part most EDA tools skip. You don&#8217;t just want the statistics. You want the interpretive layer &#8212; the thing that tells you where to look next.</p><div><hr></div><h2>Step 3: Save Your Data Profile as a Reusable Reference</h2><p>Before you close the session, ask Claude Code to produce a one-page data dictionary from the EDA output &#8212; column names, inferred data types, description of each field, and any quality flags. Save it.</p><pre><code><code>Based on the EDA you just ran, produce a data dictionary in table format:
| Column Name | Data Type | Description | Quality Flags |

This will serve as the canonical reference for this dataset going forward. Keep
descriptions to one sentence. Flag any columns with quality concerns in the
Quality Flags column.</code></code></pre><p>Now you have documentation that didn&#8217;t exist five minutes ago &#8212; and every analyst who touches this dataset in the future doesn&#8217;t have to reverse-engineer it from scratch.</p><div><hr></div>
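<p>If you want to seed the dictionary yourself before asking Claude to fill in descriptions, a rough pandas sketch of the skeleton follows. The >5% null flag mirrors the EDA prompt's threshold; the sample column names are assumptions for illustration:</p>

```python
import pandas as pd

def dictionary_skeleton(df: pd.DataFrame, null_flag: float = 0.05) -> pd.DataFrame:
    """Table matching the | Column Name | Data Type | Description | Quality Flags | layout."""
    null_rates = df.isna().mean()
    return pd.DataFrame({
        "Column Name": df.columns,
        "Data Type": df.dtypes.astype(str).values,
        "Description": "",  # filled in from the EDA output
        "Quality Flags": [
            f"null rate {r:.0%}" if r > null_flag else "" for r in null_rates
        ],
    })

sample = pd.DataFrame({"customer_id": ["a", None, "c"], "revenue": [10.0, 20.0, 30.0]})
skeleton = dictionary_skeleton(sample)
```

<p>Hand the skeleton to Claude Code with the EDA output and ask only for the Description column; the mechanical columns are already correct.</p>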
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-code-for-data-teams">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code for Legal]]></title><description><![CDATA[The bottleneck in most legal teams isn&#8217;t the law.]]></description><link>https://www.gtmaipodcast.com/p/claude-code-for-legal</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-code-for-legal</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:45:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/44468ead-baf3-42f0-823b-03593a2da6b4_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The bottleneck in most legal teams isn&#8217;t the law. It&#8217;s bandwidth.</p><p>A contract sits in the queue for a week because the attorney is handling three deals and a regulatory filing. An NDA that should take 20 minutes to review is still waiting on Thursday because Monday&#8217;s stack never cleared. The GC knows there are auto-renewals triggering next month &#8212; they just haven&#8217;t had time to pull the report and look.</p><p>Claude Code doesn&#8217;t fix the law. It fixes the bandwidth problem. It handles the first pass &#8212; the summary, the comparison, the risk flag &#8212; so that when the attorney&#8217;s time arrives, it&#8217;s spent on judgment, not on reading the same boilerplate for the fourth time.</p><p>That&#8217;s the trade. Not AI as lawyer. AI as first-pass analyst that compresses the pre-work.</p><p>One caveat that I&#8217;ll say once and mean throughout: Claude Code assists with legal workflows. Every output produced using these prompts requires review by qualified legal counsel before use. 
This is a tool for the first pass &#8212; not the final word.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ol><li><p>A contract summary generator &#8212; plain-English review of any agreement in minutes, not days</p></li><li><p>A contract clause comparator &#8212; side-by-side analysis across multiple vendor agreements</p></li><li><p>A contract database audit &#8212; visibility across your full contract portfolio before things expire</p></li><li><p>A policy gap analyzer &#8212; systematic identification of what your policy library is missing</p></li><li><p>An NDA redline analyzer &#8212; pre-negotiation analysis that compresses the back-and-forth</p></li></ol><div><hr></div><h2>Step 1: Understand What You&#8217;re Working With</h2><p>Claude Code is Anthropic&#8217;s AI environment at claude.ai. You describe what you want to analyze &#8212; in plain English &#8212; and Claude works through it. No legal software required, no specialized tools. The skill is learning to be specific about what you paste in and what you want to come out.</p><p>The 1 million token context window is what makes this practical for legal work. You can paste an entire contract &#8212; full text, every exhibit, every schedule &#8212; and Claude Code has the whole thing in front of it at once. It&#8217;s not summarizing what it skimmed. It&#8217;s reading all of it.</p><p><strong>To get started:</strong> Go to claude.ai, start a new project, and open Claude Code. You need a paid plan (Pro or Teams).</p><div><hr></div><h2>Step 2: Build Your Contract Summary Generator</h2><p>This is where most legal teams find the immediate win. Take any contract &#8212; NDA, vendor agreement, customer agreement, SOW &#8212; and paste the full text. Then run this prompt:</p><pre><code><code>I'm reviewing this contract and need a structured first-pass analysis to inform discussion
with counsel. Please produce:

1. One-paragraph plain-English summary &#8212; what this agreement does, who the parties are,
   what each party commits to

2. Key commercial terms &#8212; term length, payment terms, termination rights, auto-renewal
   clauses, notice periods

3. Top 5 risk provisions &#8212; clauses that favor the other party or create material exposure
   for us. For each: quote the relevant language, explain the risk in plain English

4. Missing standard protections &#8212; what's absent that we'd typically expect in this type
   of agreement (indemnification, limitation of liability, IP ownership, etc.)

5. 3 questions I should ask before signing or before sending to counsel for review

Note: this is a first-pass review to inform discussion with counsel, not legal advice.

[PASTE FULL CONTRACT TEXT HERE]</code></code></pre><p><strong>What you get back:</strong> A structured analysis that tells you what you&#8217;re looking at, where the risk lives, and what&#8217;s missing &#8212; in language anyone on the team can act on. The attorney who reviews it next is starting from a flagged document, not a cold read.</p><p>This isn&#8217;t a lawyer replacement. It&#8217;s the first-pass that saves the lawyer time. The difference between &#8220;here&#8217;s a contract, what do you think?&#8221; and &#8220;here&#8217;s a contract &#8212; here are the five clauses I flagged, here are the three missing protections, what&#8217;s your take?&#8221; is about four hours of attorney time per agreement.</p><div><hr></div><h2>Step 3: Your First Result</h2><p>Most teams run this the first time on a contract they already know. An NDA they&#8217;ve reviewed. A vendor agreement that&#8217;s been signed for two years. They want to verify the output before they trust it.</p><p>What usually happens: the summary is accurate. The commercial terms section catches the auto-renewal they&#8217;d forgotten about. The risk provisions flag something the team knew was there but hadn&#8217;t articulated clearly. And the &#8220;missing standard protections&#8221; section surfaces something they&#8217;d missed &#8212; or a conversation they should have had before signing.</p><p>Take that result seriously. The point of the first pass isn&#8217;t the final answer &#8212; it&#8217;s making sure nothing gets through unchecked because the queue was too deep.</p><div><hr></div>
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-code-for-legal">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code for HR]]></title><description><![CDATA[HR has more data than it acts on.]]></description><link>https://www.gtmaipodcast.com/p/claude-code-for-hr</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-code-for-hr</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:44:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ae348035-5653-4ed7-871d-8ba4ae011fc2_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>HR has more data than it acts on.</p><p>Compensation data sits in a spreadsheet that nobody has time to audit for equity. Engagement survey verbatims pile up unread because tagging takes too long. Attrition data lives in an HRIS export that nobody has analyzed since the last time someone left. Headcount modeling happens in a CFO conversation nobody prepped for. Job architecture stays inconsistent because the job that would fix it takes a week no one can find.</p><p>Claude Code reads all of it. The 1 million token context window means you can paste three years of attrition data, the full compensation file, and the engagement survey results simultaneously &#8212; and ask a single question that connects all three. No SQL. No data analyst. 
No six-week consulting engagement.</p><p>This guide walks through five data-heavy HR workflows that live in spreadsheets and never get the analysis they deserve.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ol><li><p><strong>A compensation equity analyzer</strong> &#8212; flags pay gaps, market risk, and recommended adjustments in a format you can bring to the CFO</p></li><li><p><strong>A headcount and org design model</strong> &#8212; span of control, layer analysis, cost concentration, and two headcount scenarios</p></li><li><p><strong>An engagement survey analyzer</strong> &#8212; themes from verbatims, score movements, and the one intervention most likely to move the needle</p></li><li><p><strong>A job architecture builder</strong> &#8212; titles, families, levels, and career pathing from your current role inventory</p></li><li><p><strong>An attrition analysis and prediction model</strong> &#8212; who&#8217;s leaving, when, and the leading indicators in your current workforce</p></li></ol><div><hr></div><h2>Step 1: Understand What You&#8217;re Working With</h2><p>Claude Code is the AI environment inside Claude.ai. You describe the analysis you want &#8212; in plain English &#8212; and Claude builds it. No Python required. No pivot table expertise. The skill is learning to be specific about two things: what data you&#8217;re putting in, and what output you need to do something with.</p><p>The 1 million token context window is what makes this work for HR. It means you&#8217;re not sampling your compensation data or summarizing your survey results before Claude sees them. You&#8217;re pasting the full file &#8212; every row &#8212; and asking Claude to find what&#8217;s actually in it.</p><p><strong>To get started:</strong> Go to claude.ai, open a new project, and start Claude Code. 
You need a paid plan (Pro or Teams).</p><div><hr></div><h2>Step 2: Compensation Equity Analyzer</h2><p>Compensation equity analysis is one of those things HR knows needs to happen and rarely gets done because it requires either a consultant or a data analyst who doesn&#8217;t have the bandwidth. Claude Code turns it into an afternoon.</p><p>Pull your compensation data from your HRIS. You want at minimum: employee ID (anonymized as needed), role, level, department, base salary, bonus target, tenure, and gender or other demographic dimensions if tracked and if your organization has a policy for including them in equity analysis. Remove names. You can keep anonymized IDs if you need to trace back to specific employees later.</p><p>Paste the data and use this prompt:</p><pre><code><code>Analyze this compensation data for equity issues. Produce:

1. Average and median compensation by role and level &#8212; flag any roles with wide ranges
   that suggest inconsistency in how we've been setting pay
2. Any statistically significant compensation gaps by demographic dimension (if data is
   included)
3. Employees most likely to be at market risk &#8212; below the 25th percentile for their
   role and level
4. Recommended adjustments by priority &#8212; highest flight risk first
5. An equity summary I can bring to the CEO and CFO with the business case for
   adjustments

[PASTE COMPENSATION DATA]</code></code></pre><p><strong>What you get back:</strong> Compensation equity analysis that used to require a consultant takes an afternoon. The summary section is particularly useful &#8212; it frames the adjustments as a retention and risk story, not just a fairness story, which is the frame that moves budget conversations.</p><p>One thing to do next: ask Claude Code to model the cost of bringing the flagged employees to the 50th percentile for their role. That number is almost always smaller than leadership expects, and having it ready turns a principle discussion into a budget decision.</p><div><hr></div><h2>Step 3: Your First Result</h2><p>Run the compensation analysis before you read further. Use real data &#8212; even if it&#8217;s just one department to start.</p><p>What you&#8217;re looking for in the output: is the analysis surfacing things your team already suspects but hasn&#8217;t been able to quantify? Are there roles where the pay range is so wide it suggests the title is being used for two distinct levels of work? Is there a tenure band where employees are clustering at below-market pay?</p><p>The equity summary section matters most. Before you take anything to the CEO or CFO, read it like they will. If the business case isn&#8217;t clear in the first paragraph, ask Claude Code to sharpen it: &#8220;Rewrite the executive summary to lead with retention risk and cost, then move to the equity case.&#8221;</p><p>That&#8217;s the first output worth taking somewhere. The rest is below the line.</p><div><hr></div>
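<p>The 25th-percentile check in item 3 is simple enough to sanity-check yourself before the session. A minimal pandas sketch, assuming columns named <code>role</code>, <code>level</code>, and <code>base_salary</code> (your HRIS export's headers will differ):</p>

```python
import pandas as pd

def flag_market_risk(comp: pd.DataFrame) -> pd.DataFrame:
    """Mark employees below the 25th percentile of base salary for their role + level."""
    p25 = comp.groupby(["role", "level"])["base_salary"].transform(
        lambda s: s.quantile(0.25)
    )
    out = comp.copy()
    out["at_market_risk"] = out["base_salary"] < p25
    return out

# Illustrative data: four AEs at the same level
comp = pd.DataFrame({
    "role": ["AE"] * 4,
    "level": ["L2"] * 4,
    "base_salary": [90_000, 100_000, 110_000, 120_000],
})
flagged = flag_market_risk(comp)
```

<p>Running the same check on your real file before the session gives you a baseline to verify Claude's output against, which matters on the one analysis you'll take to the CFO.</p>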
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-code-for-hr">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code for Finance]]></title><description><![CDATA[Finance teams are the most spreadsheet-native function in the company.]]></description><link>https://www.gtmaipodcast.com/p/claude-code-for-finance</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-code-for-finance</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:42:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fb1752d0-c1d0-47fa-96d4-19a3cbf46c4f_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Finance teams are the most spreadsheet-native function in the company. The CFO can build a pivot table faster than most engineers can spin up a dev environment. The FP&amp;A analyst has a working knowledge of Excel that most people would recognize as a separate professional skill. The controller has a multi-tab model that took six months to build and would take another six months to explain to anyone outside the department.</p><p>None of that changes with AI.</p><p>What does change: the part that comes after the model. The analysis. The narrative. The 45-minute slog of turning numbers into an explanation a non-financial executive can act on. The quarterly variance report that gets emailed out as a spreadsheet attachment and read by exactly one person. The scenario analysis that lives in a separate tab that nobody opens because the formulas are nested five layers deep and labeled &#8220;Version_FINAL_v3.&#8221;</p><p>Finance doesn&#8217;t need AI to replace its models. It needs something that can read the full model &#8212; the entire GL export, the full budget, three quarters of actuals &#8212; and identify the signal. Then write the narrative that turns numbers into decisions.</p><p>Claude Code&#8217;s 1 million token context window means you can paste all of it simultaneously. The analysis runs against the complete picture, not a sample. 
The variance isn&#8217;t calculated on a subset of departments. The scenario isn&#8217;t modeled from a summary tab. The whole thing is in the room.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ol><li><p>A budget vs. actual variance analyzer &#8212; surfaces the 10 variances that matter, writes the CFO summary for you</p></li><li><p>A financial model scenario builder &#8212; three scenarios, sensitivity analysis, board-ready narrative</p></li><li><p>A cash flow and runway calculator &#8212; burn rate, runway math, cash flow positive milestone</p></li><li><p>A department P&amp;L builder &#8212; contribution by department from your GL export</p></li><li><p>A financial KPI dashboard &#8212; HTML dashboard with traffic light status, reusable every week</p></li></ol><div><hr></div><h2>Step 1: Understand What You&#8217;re Working With</h2><p>Claude Code is Anthropic&#8217;s AI environment at claude.ai. You describe what you want in plain English &#8212; no code, no formulas, no technical specification required. The 1 million token context window is what separates this from every AI tool finance has tried before. You can paste your entire GL export. The full budget. Actuals for multiple quarters. Claude Code holds it all in context and analyzes against the complete dataset.</p><p><strong>To get started:</strong> Go to claude.ai, open Claude Code. You need a paid plan (Pro or Teams).</p><p><strong>The setup that changes every output:</strong> Before you paste any data, open with a context statement.</p><pre><code><code>I'm a [CFO / VP Finance / FP&amp;A Director / Controller] at [Company Name].
We are a [stage: Series B SaaS / $50M revenue professional services / etc.].
Our fiscal year runs [month to month]. When I give you financial data, prioritize:
finding variances that require management attention, identifying trends that affect
forward guidance, and writing narrative that is appropriate for board or executive
audiences. I'll give you specific tasks in a moment.</code></code></pre><p>Do this every session. It changes the register of every output you get back.</p><div><hr></div><h2>Step 2: Budget vs. Actual Variance Analyzer</h2><p>This is the workflow that usually gets a finance team&#8217;s attention. Export your budget vs. actual report &#8212; or paste it directly from your spreadsheet. You need: department, budget line item, budget amount, actual amount. That&#8217;s it.</p><pre><code><code>Analyze this budget vs. actual data and produce:

1. Top 10 variances by dollar amount &#8212; list both favorable and unfavorable,
   sorted by absolute dollar variance
2. Top 10 variances by percentage &#8212; flag anything over 20% variance,
   favorable or unfavorable
3. Department-level summary &#8212; which departments are over budget overall,
   which are under, and by how much in aggregate
4. A narrative explanation of the 3 most significant variances, written in
   the style appropriate for a CFO summary &#8212; one paragraph per variance,
   stating what the number is, what likely caused it, and what it means
5. Recommended actions for variances that require intervention &#8212; which need
   a meeting, which need a reforecast, which can wait until end of quarter

[PASTE YOUR BUDGET VS. ACTUAL DATA HERE]</code></code></pre><p><strong>What you get back:</strong> The variance table you would have built in a pivot table, plus the narrative you would have spent an hour writing. The CFO summary portion alone saves most FP&amp;A teams 30&#8211;40 minutes per reporting cycle. The recommended actions section is the thing that usually surprises people &#8212; it&#8217;s not generic. It reads the actual variance patterns and gives you specific intervention guidance.</p><p>What used to require pivot tables, conditional formatting, and manual narrative writing takes 10 minutes.</p><div><hr></div><h2>Step 3: Your First Result</h2><p>The variance report lands and something becomes clear that wasn&#8217;t before. Maybe one department is 34% over budget in a single line item, and when you see it spelled out in a paragraph rather than a cell, the cause is obvious &#8212; and so is the ask. Maybe two departments are running favorable variances large enough to fund a delayed initiative. Maybe the recommended actions section flags three variances that need intervention this week and twelve that can wait.</p><p>This is what the data has been trying to tell you. You just didn&#8217;t have time to listen.</p><div><hr></div>
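<p>If you want to pre-compute the dollar-variance ranking from item 1 before pasting, a minimal pandas sketch follows. The column names <code>department</code>, <code>line_item</code>, <code>budget</code>, and <code>actual</code> are assumptions; map them to your export's headers:</p>

```python
import pandas as pd

def top_variances(df: pd.DataFrame, n: int = 10) -> pd.DataFrame:
    """Rank line items by absolute dollar variance, favorable and unfavorable alike."""
    out = df.copy()
    out["variance"] = out["actual"] - out["budget"]
    out["variance_pct"] = out["variance"] / out["budget"]
    order = out["variance"].abs().sort_values(ascending=False).index
    return out.loc[order].head(n)

# Illustrative data -- three line items across departments
bva = pd.DataFrame({
    "department": ["Sales", "Eng", "Mktg"],
    "line_item": ["Travel", "Cloud", "Events"],
    "budget": [100_000, 200_000, 50_000],
    "actual": [134_000, 190_000, 52_000],
})
ranked = top_variances(bva)
```

<p>The table is the easy part; the narrative in item 4 is where the prompt earns its keep. Pre-computing the ranking just lets you spot-check that the variances Claude narrates are the ones your numbers actually show.</p>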
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-code-for-finance">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code for Sales Enablement]]></title><description><![CDATA[Here&#8217;s the dirty secret of sales enablement: your content problem isn&#8217;t a creation problem.]]></description><link>https://www.gtmaipodcast.com/p/claude-code-for-sales-enablement</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-code-for-sales-enablement</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:41:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/13cd60ea-7fe3-4141-9c2b-268f8f7a3052_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here&#8217;s the dirty secret of sales enablement: your content problem isn&#8217;t a creation problem. It&#8217;s a currency problem.</p><p>The battlecard exists. It was written 18 months ago when someone had a free Friday afternoon. The objection-handling guide is in a shared drive somewhere &#8212; third subfolder, filed under &#8220;Resources (OLD).&#8221; The certification assessment was built for a product version your team hasn&#8217;t sold in two quarters.</p><p>You&#8217;re not behind because nobody wrote the content. You&#8217;re behind because the pace of change &#8212; new competitors, new pricing, new features, new reps &#8212; outruns the pace at which any two-person enablement team can keep documentation current. The bottleneck isn&#8217;t creativity. It&#8217;s throughput.</p><p>Claude Code changes the throughput equation. 
Not because it&#8217;s magic &#8212; because it can hold your entire knowledge base in context (1 million tokens, which is your complete product docs, your full competitive file, and your current enablement library, all at once) and produce structured, formatted, specific content in the time it used to take you to open a blank document.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ol><li><p>A battlecard generator &#8212; complete, formatted, competitor-specific</p></li><li><p>A call library analysis &#8212; objections, talk tracks, struggle patterns from your Gong exports</p></li><li><p>An onboarding curriculum &#8212; week-by-week, with daily activities and knowledge checkpoints</p></li><li><p>A certification assessment generator &#8212; 20 questions, answer key, difficulty ratings</p></li></ol><div><hr></div><h2>Step 1: Setup</h2><p>Claude Code is available at claude.ai. You&#8217;ll use it through the browser &#8212; no installation, no engineering required.</p><p>Before you start any of these workflows, one non-obvious principle: <strong>context depth determines output quality.</strong> Claude Code&#8217;s 1M token context window means you can paste in your entire product documentation, your competitive intelligence file, and your current enablement library simultaneously. The more specific the input, the more specific the output. Thin inputs produce thin outputs. Rich inputs produce content you can actually use.</p><p>For every workflow below, gather your source material before you open Claude Code. The upfront work is document assembly, not writing.</p><div><hr></div><h2>Step 2: Battlecard Generator</h2><p>A battlecard has a standard anatomy: competitor overview, their strengths, their weaknesses, how you win against them, how you lose, common objections, trap questions they ask, your winning proof points. 
Claude Code builds all of it &#8212; including the structure &#8212; from the raw inputs you feed it.</p><p><strong>What to assemble before you start:</strong></p><ul><li><p>Your product one-pager or feature summary</p></li><li><p>Your current pricing structure</p></li><li><p>The competitor&#8217;s website copy (paste the key pages)</p></li><li><p>Their pricing page</p></li><li><p>Any notes or Slack messages from reps about common objections in competitive deals</p></li></ul><p><strong>The prompt:</strong></p><pre><code><code>You are a sales enablement specialist building a competitive battlecard.

Here is our product overview:
[PASTE YOUR PRODUCT SUMMARY]

Here is our pricing:
[PASTE YOUR PRICING]

Here is information about [COMPETITOR NAME] &#8212; their website, positioning, and pricing:
[PASTE COMPETITOR CONTENT]

Here are objections our reps have heard when competing against them:
[PASTE REP NOTES / SLACK MESSAGES / GONG SNIPPETS]

Build a complete sales battlecard with these sections:
1. Competitor Snapshot (3-sentence summary of who they are and who they sell to)
2. Their Strengths (be honest &#8212; 3-4 real strengths)
3. Their Weaknesses (3-4 specific, exploitable gaps)
4. How We Win (3-5 specific scenarios where we have a clear advantage)
5. How We Lose (2-3 honest patterns &#8212; what situations favor them)
6. Top 5 Objections + Our Response (objection as a direct quote, response as talking points)
7. Trap Questions They Ask (questions they use to make us look bad, and how to reframe)
8. Our Proof Points (specific stats, case studies, or customer outcomes that apply in this
   competitive context)
9. One-Line Knockout (a single sentence a rep can drop when the deal is on the line)

Format as a clean, scannable document a rep can read in 5 minutes.</code></code></pre><div><hr></div><h2>Step 3: First Result</h2><p>Run the prompt. Read the output. The first pass will be 85&#8211;90% usable. Your job is to add the institutional knowledge Claude Code can&#8217;t infer &#8212; the deal story from last quarter, the specific customer quote that kills this competitor every time, the pricing nuance that isn&#8217;t public.</p><p>That&#8217;s 20 minutes of editing, not 3 hours of writing from scratch.</p><div><hr></div>
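<p>The &#8220;gather your source material&#8221; step is mechanical, so if you rebuild battlecards for several competitors it can be scripted. A rough sketch that stitches source files into one labeled block to paste in; the filenames are placeholders for whatever your documents actually are, not a prescribed layout:</p>

```python
from pathlib import Path

# Stitch source documents into one labeled block for the battlecard prompt.
# These filenames are examples -- point them at your own exports.
sources = [
    ("Here is our product overview:", "product_overview.txt"),
    ("Here is our pricing:", "pricing.txt"),
    ("Here is competitor content:", "competitor_pages.txt"),
    ("Here are rep notes on objections:", "rep_notes.txt"),
]

parts = []
for label, filename in sources:
    path = Path(filename)
    # Keep a visible marker for anything missing instead of failing silently
    body = path.read_text() if path.exists() else "[MISSING: " + filename + "]"
    parts.append(label + "\n" + body)

prompt_body = "\n\n".join(parts)
print(prompt_body[:200])
```

<p>One assembled file per competitor means the prompt itself never changes &#8212; only the inputs do.</p>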
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-code-for-sales-enablement">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code for Customer Success]]></title><description><![CDATA[CS teams are sitting on the most under-analyzed dataset in the company.]]></description><link>https://www.gtmaipodcast.com/p/claude-code-for-customer-success</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-code-for-customer-success</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:39:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1d5994de-8c7d-4d7c-a224-41d00c2b766d_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>CS teams are sitting on the most under-analyzed dataset in the company.</p><p>Every churned account left signals before it left &#8212; in NPS scores that trended down, in support ticket volume that spiked, in usage data that flatlined two months before the renewal conversation that didn&#8217;t go well. The problem isn&#8217;t that the data isn&#8217;t there. It&#8217;s that synthesizing it takes hours nobody has, so you read samples instead of patterns. You pull five NPS responses to prep for the quarterly readout instead of reading all 500. You review the last three churned accounts to understand the trend instead of the last 50. And you find out about churn in the churn conversation &#8212; when the customer already made the decision.</p><p>Claude Code changes the throughput equation. The 1 million token context window means you can paste your entire NPS dataset, your full churned account history, your complete renewal pipeline &#8212; all of it at once, not a sample. 
Claude Code reads the whole thing and finds what the patterns actually are, not what you assume they are.</p><p>Here&#8217;s how to put it to work.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ul><li><p>An NPS verbatim analyzer that extracts churn signals, expansion signals, and product gaps from hundreds of survey responses at once</p></li><li><p>A churn prediction model built from your actual churned account history, not generic frameworks</p></li><li><p>A renewal risk scorecard that applies your churn profile to every open renewal, sorted by urgency</p></li><li><p>A QBR template builder that fills in talking points, surfaces hard topics, and identifies expansion angles from raw account data</p></li><li><p>An expansion playbook that matches your product catalog to specific accounts with specific openers</p></li></ul><div><hr></div><h2>Step 1: Understand What You&#8217;re Working With</h2><p>Claude Code is Anthropic&#8217;s AI environment. You describe what you want to analyze or build &#8212; in plain English &#8212; and Claude figures out how to do it. No programming required. The skill is learning to be specific about what goes in and what you want to come out.</p><p>The 1 million token context window is what makes this genuinely useful for CS teams, not just interesting. It means you can paste all 500 NPS responses from your last survey cycle and Claude Code reads every single one &#8212; not a representative sample, not the top 50. The same goes for churned account history: you can upload 12 months of account data at once and get analysis across the full dataset.</p><p><strong>To get started:</strong> Go to claude.ai, start a new project, and open Claude Code. You need a paid plan (Pro or Teams).</p><p><strong>A note on data prep:</strong> Export from your CRM or survey platform as CSV. Column headers matter &#8212; be descriptive. 
&#8220;NPS_Score&#8221; is clearer than &#8220;Q1.&#8221; &#8220;Days_to_Churn&#8221; is clearer than &#8220;Col_F.&#8221; Claude Code reads headers to understand what it&#8217;s working with, and better headers produce better analysis.</p><div><hr></div><h2>Step 2: Run Your NPS Verbatim Analyzer</h2><p>This is the workflow most CS teams have never done systematically &#8212; not because they don&#8217;t want to, but because reading 500 free-text responses and extracting patterns from them takes a full day of focused work that never gets scheduled.</p><p>Export your NPS survey results: respondent name or account name, NPS score (0&#8211;10), and the open-text response. Paste the full dataset into Claude Code. Then:</p><pre><code><code>I lead a Customer Success team and I need to extract meaningful insights from our NPS
survey responses. I've pasted our full survey results below &#8212; [X] responses with scores
and verbatim feedback.

Please analyze this data and produce:

1. The top 5 themes by frequency across all verbatims &#8212; what are customers actually
   talking about most? For each theme: label, frequency count, and 2&#8211;3 representative
   quotes.

2. Verbatims that predict churn &#8212; look for responses where a low score (0&#8211;6) is paired
   with specific language patterns: frustration with support, confusion about value,
   comparisons to competitors, mentions of evaluating alternatives. List these accounts
   and the specific language that flags them.

3. Verbatims that predict expansion &#8212; look for responses where a high score (9&#8211;10) is
   paired with language suggesting growth potential: references to team growth, new
   use cases, wanting more features, mentioning other departments. List these accounts
   and what specifically they said.

4. The top 3 product gaps mentioned most frequently &#8212; features or capabilities customers
   say they need that they're not getting. Include a rough frequency count and the
   clearest quote for each.

5. One coaching insight for the CS team &#8212; something in the verbatims that suggests a
   pattern in how customers feel about their relationship with us, separate from product
   feedback. Be direct.

Format the output with section headers. I'll be presenting this at our quarterly CS
readout.

[PASTE YOUR NPS DATA HERE]</code></code></pre><p><strong>What you get back:</strong> A structured analysis you could present in 20 minutes. The churn-risk account list alone is usually worth the exercise &#8212; three or four accounts flagging language that nobody had connected to a risk signal before, because nobody had read all the responses in the same sitting.</p><div><hr></div><h2>Step 3: First Result</h2><p>This is where CS teams usually go quiet for a minute.</p><p>The NPS verbatim analysis comes back and it says something specific &#8212; not &#8220;customers want better support&#8221; but &#8220;seven accounts mentioned the same onboarding confusion, and four of them are coming up for renewal in Q2.&#8221; Not &#8220;expansion opportunity exists&#8221; but &#8220;these three accounts used language suggesting they&#8217;re growing into a use case you haven&#8217;t sold to them yet.&#8221;</p><p>That specificity matters. NPS has always been a leading indicator for churn and expansion. The problem is the signal was buried in text that nobody had time to read systematically. Claude Code just read all of it in under a minute.</p><p>Take what comes back seriously. Share the churn-risk list with your CS managers before the readout. The accounts on that list deserve a proactive outreach in the next two weeks, not a response plan after they call to cancel.</p><div><hr></div>
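<p>If you want a quick cross-check on the churn-risk list that comes back, the keyword-level version of that filter runs locally in a few lines. A simplified sketch: the sample rows and the keyword list are illustrative, and a keyword match is a far cruder signal than actually reading the verbatims:</p>

```python
# Flag low-score NPS responses whose verbatims contain churn-adjacent language.
# Sample data and keyword list are made up for illustration.
CHURN_TERMS = ["competitor", "alternative", "cancel", "frustrat", "switch"]

responses = [
    {"account": "Acme Co", "score": 3,
     "text": "Support is frustrating; we are evaluating alternatives."},
    {"account": "Globex", "score": 9,
     "text": "Great tool, our new team is adopting it fast."},
    {"account": "Initech", "score": 5,
     "text": "Unclear value. A competitor quoted us half the price."},
]

# Detractor range (0-6) paired with churn-pattern language
flagged = [
    r["account"]
    for r in responses
    if r["score"] <= 6 and any(term in r["text"].lower() for term in CHURN_TERMS)
]
print(flagged)
```

<p>Anything Claude Code flags that this crude filter also catches deserves immediate outreach. Anything it flags that keywords miss is exactly what the full-context read is for.</p>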
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-code-for-customer-success">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Claude Code for Revenue Operations]]></title><description><![CDATA[Your CRM knows where your deals are dying. You know it knows. You just don&#8217;t have time to ask.]]></description><link>https://www.gtmaipodcast.com/p/claude-code-for-revenue-operations</link><guid isPermaLink="false">https://www.gtmaipodcast.com/p/claude-code-for-revenue-operations</guid><dc:creator><![CDATA[J Moss]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:38:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6eca2d97-ccf7-46d2-92bf-536eb22dd9a2_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your CRM knows where your deals are dying. It knows which rep&#8217;s pipeline is real and which rep&#8217;s pipeline is theater. It knows your stage-2-to-3 conversion rate has been declining for two quarters. It knows you have 847 open opportunities that haven&#8217;t been touched in 90 days.</p><p>You know it knows. You just don&#8217;t have time to ask.</p><p>That&#8217;s the actual RevOps problem. It&#8217;s not a data problem &#8212; you have too much data. It&#8217;s a synthesis problem. The answers are in there. 
Getting them out requires either a data analyst you don&#8217;t have, a BI tool that takes three months to configure, or four hours of pivot tables on a Friday afternoon.</p><p>Claude Code does the pivot tables.</p><div><hr></div><h2>What You&#8217;ll Build</h2><ol><li><p><strong>CRM Data Quality Audit</strong> &#8212; A prioritized report of exactly what&#8217;s broken in your data and which records to fix first</p></li><li><p><strong>Cohort Analysis</strong> &#8212; Win rates, ASP trends, and sales cycle changes by rep, segment, and quarter</p></li><li><p><strong>Funnel Conversion Model</strong> &#8212; Stage-by-stage conversion rates, leakage points, and the math on hitting number</p></li><li><p><strong>Revenue Forecast Model</strong> &#8212; A bottoms-up range (conservative/base/optimistic) from your current pipeline</p></li><li><p><strong>Process Documentation Generator</strong> &#8212; Clean process docs, RACI, and workflow diagrams from plain-English descriptions</p></li></ol><div><hr></div><h2>Step 1: Setup</h2><p>Claude Code runs at claude.ai &#8212; go to Projects, create a new project, open a conversation. No installation, no API configuration.</p><p><strong>Pre-work checklist for your CRM export before you start:</strong></p><ul><li><p>Remove or anonymize PII you don&#8217;t need (personal email addresses, direct phone numbers)</p></li><li><p>Check that column headers are clean and descriptive &#8212; &#8220;Close Date&#8221; not &#8220;Col_G&#8221;</p></li><li><p>Handle obvious nulls: decide whether blank fields mean zero, unknown, or N/A, and note that in your prompt</p></li><li><p>Make sure stage names in your export match your actual defined stages</p></li></ul><p>Export your data as CSV. Pull the full export &#8212; don&#8217;t filter it down first. The audit is designed to find what you don&#8217;t know to look for.</p><div><hr></div><h2>Step 2: CRM Data Quality Audit</h2><p>Run this first. 
It tells you exactly what&#8217;s broken before you build anything on top of it.</p><p>Export your full CRM contacts and deals/opportunities as CSV. Upload and paste this prompt:</p><pre><code><code>You are a RevOps data analyst. I've uploaded a CSV export of our CRM deals/opportunities.

Analyze the data and produce a prioritized data quality report that includes:

1. Fields with &gt;30% null or blank rates &#8212; list each field, the null percentage, and whether
   this field is critical for reporting
2. Stage name inconsistencies &#8212; identify any deal stages that don't match a standard
   progression (typos, deprecated stages, stages that shouldn't exist)
3. Potential duplicate company/account names &#8212; look for companies that appear multiple
   times with slight variations
4. Deals open longer than 180 days &#8212; list count, total pipeline value, and the oldest
   open date
5. Deals missing a close date or with a close date in the past that are still marked open

Format the output as:
- Executive Summary (3&#8211;5 bullets on the biggest data quality risks)
- Detailed Findings (one section per issue type, with specific counts and examples)
- Recommended Fixes (prioritized by impact on forecast accuracy)

Be specific. Include example record names or IDs where possible so I know exactly what
to fix.</code></code></pre><p><strong>What you get:</strong> A report that would have taken a data analyst two hours to build. Specific records to fix. A prioritized list based on what actually affects forecast accuracy.</p><div><hr></div><h2>Step 3: First Result</h2><p>Run the audit. Read the findings. Fix the critical-path issues &#8212; specifically anything affecting close dates, stage names, and owner assignment. Those three fields drive every model you&#8217;re about to build.</p><p>When your data is clean enough to trust, keep reading.</p><div><hr></div>
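<p>Two of the audit checks &#8212; blank-field rates and open deals with close dates in the past &#8212; are easy to spot-check against the same CSV with nothing but the standard library, which is a useful way to verify what comes back. A sketch with inline sample data standing in for your export; the column headers are hypothetical and should match whatever yours are:</p>

```python
import csv
import io
from datetime import date

# Inline sample standing in for a CRM export; headers are hypothetical.
raw = """Deal Name,Stage,Close Date,Amount
Acme Renewal,Negotiation,2025-01-15,50000
Globex New,Discovery,,12000
Initech Expansion,,2024-06-01,30000
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Audit check: null/blank rate per field
for field in rows[0]:
    blanks = sum(1 for r in rows if not r[field].strip())
    print(f"{field}: {blanks / len(rows):.0%} blank")

# Audit check: deals still open with a close date in the past
today = date(2025, 3, 1)  # fixed date so the example is reproducible
stale = [r["Deal Name"] for r in rows
         if r["Close Date"] and date.fromisoformat(r["Close Date"]) < today]
print(stale)
```

<p>If Claude Code&#8217;s audit and a spot-check like this disagree, fix the export before fixing the CRM.</p>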
      <p>
          <a href="https://www.gtmaipodcast.com/p/claude-code-for-revenue-operations">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>