You Don't Need an AI Strategy.
You Need to Know How Your Business Actually Works.
This is Part 1 of a 5-part series expanding on my paper, Five Myths About AI Transformation, where I unpack the patterns that keep repeating every time a new technology wave arrives. Each week, I'll take 1 myth apart, layer in the research, and give you something you can use on Monday. This week: the myth that started the whole paper.
I worked with a major Canadian airline during the early mobile era. The smartphone had arrived, and every business felt the pressure to build an app. They needed an app because everyone needed an app.
Nobody asked the first question: what does the customer actually need?
The airline's underlying systems weren't ready to provide the information needed to power an app. There was no clear signal about where to start. They ignored the client journey, jumped straight to "we need an app," and built web apps that performed terribly. No offline capability. Slow. Disconnected from the data systems they depended on.
That was 2010.
Today, swap "app" for "AI" and the conversation is identical. Companies are swapping ChatGPT into broken workflows, bolting Copilot onto processes nobody has examined in a decade, and telling their boards they're "transforming."
The budgets are bigger. The stakes are higher. The script is the same.
The Myth:
Every Company Needs an AI Strategy
The reality: most companies need to fix their foundation before AI can help.
Stephen Andriole, a professor at Villanova who had served as CTO at multiple organizations and directed a technology office at DARPA, made this point in 2017. Not every company, process, or business model requires digital transformation. The same applies to AI. Some companies genuinely do not need AI right now. What they need is to fix the systems they already have.
I unpacked this in [Five Myths About AI Transformation](https://jasontate.ca/blog/five-myths-about-ai-transformation) alongside Donella Meadows' systems thinking and the SAMR model from Dr. Ruben Puentedura. But I want to go deeper. Because in the 18 months since AI spending went vertical, the data has gotten ugly. And the business theory from the last decade explains why companies keep running into the same wall.
The $37.5 Billion Lesson
Let me show you what "fix the foundation first" looks like when you ignore it at scale.
In January 2026, Microsoft reported its quarterly earnings and disclosed 15 million paid Microsoft 365 Copilot seats. On the surface, that sounds massive. Satya Nadella called it "record AI momentum" and said Copilot is "becoming a true daily habit."
Here's the part he didn't say out loud: Microsoft has 450 million commercial Microsoft 365 users. 15 million paid Copilot seats is 3.3% of that base. After 2 years on the market. After $37.5 billion in capital expenditure in a single quarter, directed heavily at AI compute.
It gets worse. Recon Analytics surveyed more than 150,000 enterprise users in January 2026. When people had access to Copilot, ChatGPT, and Gemini, 70% initially preferred Copilot. It was right there in the apps they already used. But after trying alternatives, only 8% kept choosing it. Copilot's paid subscriber share dropped from 18.8% to 11.5% between July 2025 and January 2026. That's a 39% contraction in 6 months.
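The arithmetic behind those figures is simple enough to verify yourself. A minimal sanity check in Python, using only the numbers from the earnings disclosure and the Recon Analytics survey cited above:

```python
# Copilot penetration of the commercial Microsoft 365 base
paid_seats = 15_000_000          # paid Microsoft 365 Copilot seats
commercial_base = 450_000_000    # commercial Microsoft 365 users

penetration = paid_seats / commercial_base
print(f"Penetration: {penetration:.1%}")        # → 3.3%
print(f"Not paying: {1 - penetration:.0%}")     # → 97%

# Recon Analytics: paid subscriber share, July 2025 vs. January 2026
share_jul, share_jan = 18.8, 11.5
contraction = (share_jul - share_jan) / share_jul
print(f"Share contraction: {contraction:.0%}")  # → 39%
```

The 39% figure is a relative contraction (7.3 points off an 18.8-point base), not a drop of 39 percentage points, which is why a 7-point slide reads as dramatic as it is.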
Even Microsoft is walking it back. Reports surfaced that the company is reevaluating how aggressively it pushes AI features in Windows 11, with plans to remove or simplify features where usage doesn't justify the investment.
This is not an AI strategy problem. This is a foundation problem. Companies bolted Copilot onto dirty data, broken processes, and workflows nobody had examined. A Gartner analyst told a 2025 conference that finding ROI from Copilot to justify full-scale deployment is "quite challenging," and most organizations were pausing to see where it makes sense.
The analysts at SemiAnalysis captured it best: "Claude for Excel is effectively what Copilot for Excel should have been, but it was launched by an external party on Microsoft's own first-party product." An outside competitor shipped a better AI experience on Microsoft's own application than Microsoft's $30-per-seat product could deliver.
That's what happens when you skip the foundation.
The Numbers Nobody Wants to Talk About
Microsoft's Copilot struggles are a symptom. The disease is everywhere.
MIT's Project NANDA published a study in July 2025 called "The GenAI Divide: State of AI in Business 2025." They reviewed over 300 publicly disclosed AI deployments, interviewed representatives from 52 organizations, and surveyed 153 senior leaders. The headline: 95% of enterprise AI pilots delivered no measurable impact on profit and loss.
Between $30 billion and $40 billion in enterprise investment. 95% of it producing zero return.
The core issue wasn't the technology. MIT's researchers found that enterprise AI systems didn't retain feedback, didn't adapt to context, and didn't fit into day-to-day workflows. A CIO they interviewed said it plainly: "We've seen dozens of demos this year. Maybe 1 or 2 are genuinely useful. The rest are wrappers or science projects."
Gartner put a number on the data problem in February 2025. 63% of organizations either don't have or aren't sure they have the right data management practices for AI. Their prediction: through 2026, organizations will abandon 60% of AI projects because the data isn't ready.
RAND Corporation measured an 80% overall AI project failure rate. But here's the number that should make every CEO sit up: projects with sustained executive involvement achieved a 68% success rate. Projects that lost executive sponsorship within 6 months? 11%. The difference between those 2 numbers is not technology. It's commitment.
Deloitte's 2026 State of AI report found that 42% of companies believe their strategy is highly prepared for AI adoption. But those same companies feel less prepared when it comes to infrastructure, data, risk, and talent. Strategy confidence outrunning operational readiness. That gap is the myth, alive and breathing.
The Substitution Trap
Dr. Ruben Puentedura developed the SAMR model to describe how organizations adopt technology. Substitution is swapping 1 tool for another with no functional change. Augmentation adds some improvement. Modification redesigns the work. Redefinition does something that wasn't conceivable before.
Most companies are stuck at Substitution. At best, they reach Augmentation. That's where "digital transformation" stopped for the majority of organizations I've worked with. AI is following the same script.
Sangeet Paul Choudary, a C-level advisor and senior fellow at UC Berkeley, published a piece in Harvard Business Review in February 2026 that cuts right to this. He studied companies like Figma and Shein, and his argument is direct: incumbents fail because they bolt new technology onto old structures instead of rethinking how work gets done.
His Figma vs. Adobe case is instructive. Adobe did everything right by conventional standards. Moved to the cloud. Subscription model. Built collaborative features. Still lost. Figma didn't just change the tool. It changed the unit of work, from the file to the element inside the file. That meant multiple people could work on the same design simultaneously without creating conflicting versions. Adobe changed the delivery mechanism. Figma changed how design work was organized.
That's Substitution vs. Modification on the SAMR scale. And it's the pattern I see in almost every consulting engagement.
Choudary's conclusion: "The question executives must ask today is not whether they're adopting AI aggressively enough but whether the basic structure of work inside their organization still makes sense, given what AI now makes possible."
That's the same question I keep asking in slightly different words: can you describe, in detail, how your business actually works right now?
Why Companies Keep Skipping This Step
Meadows would recognize every one of these failures. She wrote that you can't improve a system you can't describe. If your employees can't articulate how work actually flows through your organization, adding AI will only magnify the mess.
But there's a deeper reason companies skip the foundation. McKinsey has tracked this for years. Large-scale transformation efforts face a 70% failure rate. The 2 most common causes: lack of leadership commitment and inadequate cross-functional collaboration. Not technology shortfalls. People shortfalls.
And there's a psychological dimension. McKinsey's Digital Quotient research found that 93% of executives believe digital is critical to achieving their strategic goals. At the same time, 9 out of 10 CEOs believe their organization is not ideally set up for it. They know the gap exists. They know they're not ready. And they push forward anyway because the pressure to "do something with AI" overwhelms the discipline to do it right.
Clayton Christensen explained this pattern decades ago. Incumbents focus on improving products for their most demanding customers. They overshoot the needs of some segments and ignore others entirely. That focus on existing customers gets locked into internal processes, making it difficult for even senior managers to shift investment. It's not just true for products. It's true for how companies think about technology adoption. They keep investing in what they know, what the existing structure supports, and what the current customers demand. The foundation work isn't sexy. It doesn't show up on a quarterly earnings call. Nobody gets promoted for cleaning up data.
Choudary identified this as "architectural self-preservation." Units of work define roles, expertise, and status inside organizations. Changing them redistributes influence, away from those who manage approvals and toward those who design new ways of organizing work. Leaders who sense resistance often mislabel it as cultural inertia when it's actually the system protecting itself.
What To Do Instead
If you're running a company and someone has told you that you need an AI strategy, here's what I'd tell you instead.
Step 1: Map your actual processes.
Not the ones in the documentation nobody has updated since 2019. The real ones. How does work actually flow through your organization? Where does information get stuck? Where do people do the same task twice because 2 systems don't talk to each other?
You can't improve what you can't describe. Meadows was adamant about this. If you skip this step, everything that follows is guessing.
A Microsoft survey of 500 enterprise decision-makers across 13 countries found that only 22% strongly agreed their organization has clearly documented key processes and data dependencies. If you're in the other 78%, start here. Not with AI.
Step 2: Ask the Puentedura question.
For each process you've mapped, ask: are we doing Substitution, Augmentation, Modification, or Redefinition?
If you're swapping 1 tool for another and the work looks the same, that's Substitution. If you're adding a feature that makes the work slightly easier, that's Augmentation. Neither of those is transformation. They're maintenance.
The real question is: does this technology make it possible to do something that wasn't conceivable before? If yes, you might be looking at Modification or Redefinition. That's where the return lives. But you can't get there if you skip Step 1.
Step 3: Fix the boring stuff first.
Connect 2 systems that don't talk to each other. Remove manual steps from a workflow that hasn't been updated in a decade. Build a simple process that handles routine decisions so humans can focus on the exceptions.
This is where the MIT research is actually encouraging. The 5% of companies that succeeded with AI shared common traits: they picked 1 pain point, they executed well, and they built on existing workflows instead of trying to replace them. The highest ROI came from back-office work, document processing, compliance, and internal workflows. Not the flashy customer-facing stuff that gets the budget.
Step 4: Clean up your data.
63% of organizations either don't have or aren't sure they have the right data practices for AI. Your AI is only as good as the data you feed it. If your data lives in 7 different spreadsheets that nobody reconciles, fix that before you buy anything new.
Step 5: Pick 1 problem. Solve it. Measure it.
Not 10 problems. 1. Choose the process with the most friction, the best data, and the highest visibility. Solve it. Measure the before and after: hours saved, errors reduced, capacity created. Make that win visible.
The RAND data shows that projects with sustained executive involvement succeed at 68% vs. 11% without it. You get sustained involvement by showing a win early. A single win buys you the credibility to do the next one.
The First Honest Question
The first honest question is not "what's our AI strategy?"
It's "can we describe, in detail, how our business actually works right now?"
If the answer is no, start there. The AI can wait.
And if someone tells you that waiting means falling behind, remind them of Microsoft. $37.5 billion in a single quarter. Copilot bolted onto 450 million seats. 97% of those users decided it wasn't worth paying for.
The technology worked fine. The foundation wasn't there.
This is Part 1 of a 5-part series on [Five Myths About AI Transformation](https://jasontate.ca/blog/five-myths-about-ai-transformation). Next week: Myth 2, "AI Is the Technology That Changes Everything," and why the biggest wins still come from boring, proven technology applied to the right problem.
I break down frameworks like this in From Signal to Scale, my weekly newsletter. Three signals from AI, automation, and tech. No hype. No buzzwords. Just the stuff that actually matters if you're running or building a business.
If this was useful, you'll like what shows up on Fridays. [Subscribe here]
Sources:
- Andriole, S.J. (2017). "Five Myths About Digital Transformation." MIT Sloan Management Review.
- Choudary, S.P. (2026). "Why New Technologies Don't Transform Incumbents." Harvard Business Review.
- Christensen, C.M. et al. (2015). "What Is Disruptive Innovation?" Harvard Business Review.
- Deloitte (2026). "The State of AI in the Enterprise."
- Gartner (2025). "Lack of AI-Ready Data Puts AI Projects at Risk." Gartner Newsroom.
- McKinsey Digital Quotient Benchmark.
- Meadows, D.H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
- Microsoft FY2026 Q2 Earnings Call (January 2026).
- MIT Project NANDA (2025). "The GenAI Divide: State of AI in Business 2025."
- Puentedura, R. (2006). SAMR Model.
- RAND Corporation (2025). AI Project Failure Analysis.
- Recon Analytics (2026). Enterprise AI Subscriber Survey, 150,000+ respondents.
- SemiAnalysis (2026). Microsoft AI Analysis.
- Syncari (2026). "What AI-Ready Data Foundations Require in 2026." Microsoft survey of 500 enterprise decision-makers.