1/16/26: The 5-Part AI Framework That Saves GTM Teams 15+ Hours Weekly
Welcome, as usual, to our weekly podcast and newsletter! This week, our own Jonathan Moss walks through a 5-part AI framework, using several easy-to-use AI tools, that saves a TON of time: 15+ hours a week.
You can also access the NotebookLM, which includes not only the deep dive but also a guide to using these resources in some very interesting ways.
We're also running our 2026 GTM AI Report! We had 300+ participants last year, and we would LOVE your insights this year.
The goal of this report is to get a VERY real idea of where things are right now in the GTM AI space. We will cover:
AI Agents
Prompting
AI Tools
AI Strategy
Challenges or problems
Patterns across company sizes and industries
What to do in 2026
Insights into what is coming next
Practical use case guide
REAL results from people in the market
Please click on this link and spend 5-10 minutes talking to the AI agent (instead of filling out a Google form), and I will send you the full report once everything is analyzed!
Let's get into it!
You can go to YouTube, Apple, or Spotify, as well as a whole host of other locations, to hear the podcast or watch the video interview.
The 5-Part Prompt Framework That Replaced Our Workflow Tools
Everyone’s building complex automations in Zapier and n8n. We’re just asking AI to do it. The results are identical, but we shipped in 10 minutes instead of 3 hours.
In this week’s GTM AI Podcast, JMo walked through the evolution from structured prompting frameworks to simple workflow automation using just natural language. What used to require connecting APIs and debugging integrations now happens by describing what you want. Here’s what actually matters:
1) Prompt structure is simpler now, but context discipline matters more.
LLMs got smart enough that you don’t need role-playing prompts anymore. No more “You are an expert marketing strategist with 20 years of experience...” None of that. But you still need to give them the right context in the right order.
The framework that works:
Objective - Define success with measurable outcomes. Not “help me with marketing” but “get 10,000 signups and 1,000 paid customers in 6 months with CAC under $150.” The LLM needs to know what winning looks like.
Constraints - What must and must not happen. Budget limits, scope boundaries, data recency requirements. Example: “Target SMBs 50-500 employees in tech marketing, North America only, data from last 12 months.” Without this, you get outputs you can’t actually execute.
Inputs - What data you’re providing. Product one-pagers, market research, budget docs, or even just assumptions if you don’t have the data yet. This tells the LLM what it’s actually working with.
Output - Exact format you want. “2000-3000 word GTM plan in professional tone with sections on target audience, value prop, pricing, channels, KPIs.” Be specific about structure.
Evaluation - Make it check its own work. “Verify budget alignment, consider competitive scenarios, ask clarifying questions.” You never take first output at face value.
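To make the five parts concrete, here is a minimal sketch of the framework as a reusable template. The field values (and the whole `PromptSpec` helper) are illustrative, not a tool from the episode; the point is the fixed ordering: Objective first, Evaluation last.

```python
from dataclasses import dataclass

@dataclass
class PromptSpec:
    """The 5-part framework: Objective, Constraints, Inputs, Output, Evaluation."""
    objective: str
    constraints: list[str]
    inputs: list[str]
    output: str
    evaluation: list[str]

    def render(self) -> str:
        # Context discipline: the order of sections is part of the framework.
        bullet = lambda items: "\n".join(f"- {item}" for item in items)
        return "\n\n".join([
            f"## Objective\n{self.objective}",
            f"## Constraints\n{bullet(self.constraints)}",
            f"## Inputs\n{bullet(self.inputs)}",
            f"## Output\n{self.output}",
            f"## Evaluation\n{bullet(self.evaluation)}",
        ])

# Example values drawn from the framework description above.
spec = PromptSpec(
    objective="Get 10,000 signups and 1,000 paid customers in 6 months with CAC under $150.",
    constraints=["Target SMBs 50-500 employees in tech marketing",
                 "North America only",
                 "Use data from the last 12 months"],
    inputs=["Product one-pager", "Market research summary", "Budget doc"],
    output="2000-3000 word GTM plan in professional tone with sections on "
           "target audience, value prop, pricing, channels, KPIs.",
    evaluation=["Verify budget alignment",
                "Consider competitive scenarios",
                "Ask clarifying questions before finalizing"],
)
print(spec.render())
```

Once the skeleton exists, building a new strategy prompt is just swapping the field values, which is why the "under 2 minutes" claim is plausible.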
The result: I created a complete GTM strategy prompt using this framework in under 2 minutes. The LLM generated the full strategy, checked it against constraints, and surfaced gaps. No back-and-forth iterations needed.
2) Data analysis went from “upload to specialist tool” to “ask AI what you want to know.”
I uploaded two HubSpot Excel exports. Simple prompt: “Analyze this CRM data. Provide two years of trends, key insights, next best actions, and visualizations.”
What came back:
Critical alert: revenue declining 42%, win rates dropping
Executive dashboard with trends across deal volume, win rates, revenue by product
Segment and regional performance breakdown
Detailed action plan with week-by-week emergency response
Then I asked it to build an interactive dashboard. Got a fully functional web-based viz I could publish and share via link. Anyone with the URL can access it, no Claude account required.
The strategic shift: Your RevOps team doesn’t need specialized BI tools or data analysts for exploratory analysis anymore. The bottleneck between “I have data” and “I have insights” collapsed. What took a data analyst 6 hours to build now takes 5 minutes of natural language prompting.
Time saved on that analysis: 4-6 hours. Cost saved: $500-800 contractor rate. Quality: comparable for initial insights, then you can refine.
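For a sense of what the AI is automating behind a prompt like that, here is a stdlib-only sketch of the yearly win-rate and revenue rollup. The column names (`close_date`, `stage`, `amount`) and the sample rows are made up; a real HubSpot export would have different headers.

```python
import csv
import io
from collections import defaultdict

# Hypothetical CRM export; column names and values are illustrative only.
CRM_CSV = """close_date,stage,amount
2024-03-01,closed_won,12000
2024-06-15,closed_lost,8000
2024-09-20,closed_won,5000
2025-02-10,closed_lost,9000
2025-05-05,closed_won,4000
2025-08-30,closed_lost,7000
"""

def yearly_summary(csv_text: str) -> dict:
    """Roll up closed deals into per-year win rate and won revenue."""
    wins = defaultdict(int)
    deals = defaultdict(int)
    revenue = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        year = row["close_date"][:4]
        deals[year] += 1
        if row["stage"] == "closed_won":
            wins[year] += 1
            revenue[year] += float(row["amount"])
    return {y: {"win_rate": wins[y] / deals[y], "revenue": revenue[y]}
            for y in sorted(deals)}

summary = yearly_summary(CRM_CSV)
for year, stats in summary.items():
    print(year, f"win_rate={stats['win_rate']:.0%}", f"revenue=${stats['revenue']:,.0f}")
```

The natural-language version skips all of this: the model infers the schema, picks the aggregations, and formats the output, which is where the 4-6 hours of saved analyst time comes from.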
3) Workflow automation is now as simple as describing what you want done.
I wanted to analyze competitors in the patient engagement market. My prompt:
Research these 5 competitors
Create detailed competitive analysis comparing them to our product
Build an infographic, slide deck, and interactive website
Email the summary to my colleague with attachments
Schedule a 3pm meeting to discuss
That’s it. That was the entire “workflow build.” No Zapier. No Make. No n8n. I just told Manus what I wanted.
What it delivered:
Full competitive analysis with target markets, pricing, features, differentiators
Professional infographic with comparison matrices
15-slide presentation deck (editable in-browser)
Interactive website I could publish instantly
Draft email with both attachments, ready to send
Calendar invite with agenda already written
Execution time: 18 minutes from prompt to all deliverables ready. Traditional workflow tool setup: 2-3 hours minimum, plus debugging.
The workflow tools (Zapier, Make, n8n) aren’t dead. They’re still the right choice for enterprise-scale automation with complex error handling. But for one-off tasks and recurring workflows under 10 steps? Just prompt it. The AI will handle the connections, as long as your tools have connectors set up.
4) The skill that matters now is knowing what’s possible, not how to build it.
The session demonstrated something critical: non-technical people can now ship work that required specialists six months ago. RevOps leaders building custom analytics without Python knowledge. Marketing ops creating interactive dashboards without design tools. Sales ops automating competitive research without workflow engineering.
The constraint shifted. It’s not “can I build this?” anymore. It’s “do I know this is buildable?”
That’s why we’re running these sessions. Not to teach tool mechanics, but to show what’s in the realm of possible. Once you see a competitive analysis generated and formatted into three content types with email delivery in 20 minutes, you know to ask for it next time. The limitation isn’t technical capability. It’s knowing what to request.
Practical applications we showed:
Prompt frameworks for consistent output quality
Data analysis with auto-generated visualizations
Multi-step workflows with no coding required
Content generation across formats simultaneously
Most important: we demonstrated everything using consumer AI tools (Claude, ChatGPT, Manus). No enterprise licenses. No special access. Anyone can replicate this today.
Why this matters:
Your GTM team’s capability ceiling just disappeared. The question isn’t “what can we afford to build?” or “who has bandwidth?” It’s “what should we be doing that we’re not?”
The teams winning right now are the ones who stopped asking “how do I use this tool?” and started asking “what results do I actually need?” Then they’re just describing those results to AI and iterating until it’s right.
What to test this week:
Take one task that normally requires multiple tools or handoffs
Write a natural language description of the full workflow you want
Prompt Claude, ChatGPT, or Manus with it
Track time saved versus traditional execution
The shift isn’t about learning new platforms. It’s about describing outcomes instead of building processes. Your prompts are the new automation scripts. Make them specific, give them context, and check the output. That’s the entire skillset now.
Stop building workflows. Start describing what you want done.