Jack Dorsey just cut Block’s workforce nearly in half and said AI made it possible.
45,000 tech workers lost their jobs in Q1. Over 9,200 of those layoffs were directly attributed to AI and automation. Amazon alone accounts for 30,000 — flattening management layers, removing entire functions.
And if you listen to the narrative, it sounds clean. Efficient. Inevitable.
I’ve been scaling companies for 21 years. I’ve restructured teams, made painful cuts, built operations from the ground up in industries where a single process failure could mean a patient didn’t get treated. So I’m not going to pretend layoffs are never necessary.
But what I’m watching right now isn’t strategic transformation. It’s panic with a press release.
Source: Guardian
The “Cut and Pray” Playbook
Here’s the pattern I keep seeing: a company deploys AI tools, sees some productivity gains in a pilot, and then a CFO somewhere decides that means they can cut 40% of a department.
No workflow redesign. No process reengineering. No investment in the operational infrastructure that makes AI actually work at scale.
Just fewer people doing the same jobs with a chatbot bolted on.
I watched a version of this play out 15 years ago with offshoring. Companies moved entire functions to lower-cost geographies, declared victory on the earnings call, and then spent the next three years dealing with quality collapse, knowledge loss, and customer churn that quietly ate the savings.
The companies that got offshoring right? They redesigned their operations first. Then moved the work. The order matters.
The Silent Failure Nobody’s Tracking
And here’s where it gets really dangerous. CNBC ran a major piece last week on “silent failure at scale” — the phenomenon where AI systems degrade, hallucinate, or make compounding errors that nobody detects until the damage is done.
MIT research shows 91% of ML models degrade over time. Gartner says 67% of enterprises see model degradation within 12 months. The EU AI Act will require continuous monitoring by August 2026.
Now combine those two facts: companies are cutting the humans who used to catch errors AND deploying AI systems that silently degrade. If that doesn’t keep you up at night as an ops leader, you’re not paying attention.
When I was building systems in healthcare, we had a concept we lived by: every automated process needs a human circuit breaker. Not because the automation was bad — but because no system is perfect, and in healthcare, the cost of an undetected failure isn’t a bad customer experience. It’s a bad outcome for a patient.
Most tech companies deploying AI don’t have circuit breakers. They have dashboards that show throughput and cost savings. Those are not the same thing.
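A circuit breaker in this sense doesn't have to be elaborate. One minimal form is a gate in front of every automated decision that routes low-confidence outputs to a human queue and trips entirely after repeated flags. The sketch below is illustrative, not a reference implementation; the threshold values and the `handle` interface are assumptions, not anyone's production design:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class CircuitBreaker:
    """Routes low-confidence automated outputs to a human queue; trips on repeated flags."""
    min_confidence: float = 0.85       # illustrative threshold
    max_consecutive_flags: int = 5     # trip after this many flags in a row
    consecutive_flags: int = 0
    tripped: bool = False
    human_queue: list = field(default_factory=list)

    def handle(self, item: Any, output: Any, confidence: float):
        if self.tripped or confidence < self.min_confidence:
            self.consecutive_flags += 1
            self.human_queue.append((item, output, confidence))
            if self.consecutive_flags >= self.max_consecutive_flags:
                self.tripped = True    # stop auto-approving until a human resets it
            return None                # nothing ships without review
        self.consecutive_flags = 0
        return output                  # auto-approved
```

The key design property: once tripped, even high-confidence outputs wait for review. The system fails toward humans, not away from them.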
What Actually Works: The Operator’s Framework
Three times in my career I’ve restructured operations where the goal was “do more with fewer people and better systems.” Here’s what I learned:
1. Redesign the workflow before you cut the role.
If you’re just removing a human from a process and hoping AI fills the gap, you’re going to get the AI-equivalent of a junior employee with no training and no supervision. Map the actual workflow. Identify which steps genuinely benefit from automation. Build the new process. Then staff it.
2. Invest in observability before you invest in automation.
You need to know when your AI is wrong before you can trust it to run at scale. That means building monitoring, quality checks, and escalation paths. OpenAI just acquired Promptfoo — a security and red-teaming company — specifically because they realized their own AI agents weren’t production-ready without it. If OpenAI needs that infrastructure, so do you.
3. Redeploy, don’t just reduce.
The best operators I’ve worked with didn’t just cut 40% of a team. They moved 25% to higher-value work that the AI couldn’t do, automated 30% of the tasks (not the people), and upskilled 15% to manage the new AI-augmented workflows. The net headcount reduction was real — but it was a byproduct of redesign, not the starting point.
4. Measure what actually matters.
Headcount reduction is not a KPI. It’s a vanity metric dressed up as efficiency. The real questions: Is cycle time improving? Is error rate stable or declining? Is customer satisfaction holding? Are you catching failures before they compound?
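Those four questions can be encoded as an explicit scorecard, so an "efficiency" claim has to survive contact with the numbers. The metric names and thresholds below are hypothetical, a sketch of the idea rather than a standard; note that headcount is deliberately absent:

```python
def efficiency_scorecard(before: dict, after: dict) -> dict:
    """Compares pre/post-automation operational metrics. Headcount is not a key."""
    return {
        "cycle_time_improving": after["cycle_time_days"] <= before["cycle_time_days"],
        "error_rate_stable": after["error_rate"] <= before["error_rate"] * 1.05,  # 5% noise allowance
        "csat_holding": after["csat"] >= before["csat"] - 1.0,                    # within one point
        "failures_caught_early": after["mean_hours_to_detect"] <= before["mean_hours_to_detect"],
    }
```

If any entry comes back False, the restructuring isn't done, whatever the headcount line says.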
I’ve seen companies trumpet a 35% headcount reduction and then watch NPS drop 22 points over six months. That’s not efficiency. That’s a slow-motion implosion.
The Real Opportunity
Here’s what frustrates me about the current narrative: the opportunity is genuinely massive. AI-augmented operations can be transformational. I’ve seen small teams outperform organizations three times their size when the workflows, tooling, and monitoring are right.
But “cut half your workforce because AI” is not a strategy. It’s a headline.
The companies that will win the next five years aren’t the ones cutting fastest. They’re the ones rebuilding their operations from the ground up — with AI as a core architectural decision, not a cost-cutting tool.
And if you’re an operator sitting in a planning meeting where someone says “we can replace that team with AI,” ask one question: “What’s our monitoring plan for when the AI gets it wrong?”
If the room goes quiet, you have your answer about how ready they really are.
One Thing to Try This Week
Pick one AI-automated workflow in your org. Run a manual quality audit on its last 100 outputs. Not a dashboard check — actually review the work. I’d bet real money you find error rates higher than anyone expected. That’s your starting point for building the observability layer your operation actually needs.
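The audit itself needs nothing beyond a script that tallies your manual verdicts, though with only 100 samples the observed rate deserves a confidence interval. A sketch, assuming you supply the outputs and the `judge` callable is your own manual pass/fail verdict; the Wilson score interval is a standard way to bound a proportion from a small sample:

```python
import math

def wilson_interval(failures: int, n: int, z: float = 1.96):
    """95% Wilson score interval for an observed failure proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return (max(0.0, center - margin), min(1.0, center + margin))

def audit(outputs, judge):
    """judge(output) is your manual verdict: True = correct, False = error."""
    verdicts = [judge(o) for o in outputs]
    failures = verdicts.count(False)
    low, high = wilson_interval(failures, len(verdicts))
    return {"n": len(verdicts), "failures": failures,
            "observed_rate": failures / len(verdicts),
            "rate_95ci": (low, high)}
```

Even at a 10% observed error rate, the interval on 100 samples is wide — which is itself the argument for building continuous monitoring rather than relying on one-off spot checks.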