2026.19: Nobody Started With the Person.
Happy Friday, friends! It's a long weekend in Canada…and I'm ready for the break!
There's a number in a recent MIT Sloan piece that I haven't been able to shake.
2.24 out of 10.
That's the average approval rating truck drivers gave to the AI-enabled cameras installed in their cabs. Cameras meant to improve safety. Cameras meant to help them.
Nobody thought to ask.
The camera was sold in a boardroom to a VP of Fleet Operations who needed better safety scores. The vendor demo was clean. The contract was signed. The camera went in the cab.
And the people who sit in that cab for 10 hours a day gave it a 2.24.
This is not a technology problem. It's a procurement problem. And it happens in every industry, every week, dressed up in the language of digital transformation.
Let's break it down.
Signal:
THE BUYER ISN'T THE USER. THAT GAP IS WHERE AI GOES TO DIE.
MIT Sloan's Ganes Kesari has spent years watching AI roll out across conservative industries, and his diagnosis is worth sitting with: AI doesn't fail because the technology is wrong. It fails because leaders underestimate the human and operational context in which AI tools are introduced.
Three blockers keep showing up:
AI feels inaccessible and scary.
AI looks like extra work.
AI benefits don't seem worth the pain.
All three are symptoms of the same root problem. The person who signs the contract is not the person who has to use the tool. The VP approved the camera. The driver lives with it.
Here's what that gap looks like in practice. The demo impresses the buyer. The UI confuses the user. The vendor builds a training program. Leadership schedules mandatory sessions. Adoption numbers come in low. The tool gets blamed. Nobody asks why.
I've written about the readiness gap before (From Signal to Scale - Issue 2026.06 is still worth your time if you missed it). But this is a harder problem than readiness. Readiness assumes the tool was right and the team wasn't prepared. What I'm describing here is buying the wrong tool entirely, because nobody building it started with the person.
There's a line from Kesari's piece that cut through everything else: "The cost of learning feels personal, but the benefits feel abstract and impersonal." Read that twice. The person being asked to change carries the cost. The executive who signed the deal claims the win. Nobody in that arrangement has the same stakes.
Here's the design question that enterprise AI procurement never asks: how should this make the person feel?
Not "what can it do?" Not "what does it integrate with?" Not "what's the ROI at 18 months?" How should it feel to the dispatcher at 6am, to the technician in the yard, to the driver 400 kilometres from home?
Apple started there. Every time. Jobs didn't begin with a feature list. He began with a feeling and worked backwards to the technology that could create it. The iPod wasn't "a 5GB hard drive." It was a thousand songs in your pocket. Confident. Light. Yours. The technology served the feeling, not the other way around. When Apple got it right, you didn't need a training program. You needed about 90 seconds and a sense of wonder.
Enterprise AI does the exact opposite. It starts with capability, runs it through a procurement process designed for the buyer, and hopes that feeling follows somewhere downstream. The demo shows what the model can do. The contract specifies features. The implementation plan lists integrations. Nobody in that chain ever asked what it should feel like to the person doing the actual work. And if the answer was "it should feel like surveillance," nobody said it out loud.
That's how you get a 2.24.
If you need a training program to get people using your product, the product is already broken. That's not a harsh take. That's design. Face ID didn't come with a manual. Uber didn't need a certification course. Good design is invisible. The minute you're scheduling mandatory sessions to explain a menu to a warehouse worker at the end of a 10-hour shift, you've already lost. You just haven't admitted it yet.
The real blocker isn't resistance to AI. It's change fatigue, compounded by a decade of technology rollouts that promised everything and delivered a worse version of the old process with a new login. People aren't wrong to be skeptical. They've been here before.
Scale:
FOUR MOVES TO CLOSE THE BUYER/USER GAP
The fix isn't complicated. It's just inconvenient for the people who control the budget.
1. Put users in the room during vendor selection. Not as observers. As evaluators. Give the dispatcher, the technician, and the driver a scorecard. Let them run part of the demo. The gap between what a vendor shows a VP and what a front-line worker actually needs surfaces fast when the right people are in the room. You want that tension before the contract, not after. It is significantly cheaper to kill a bad deal than to manage a failed rollout for 18 months.
2. Run a 30-day shadow pilot with your most skeptical users. Not your enthusiastic early adopters. Pick five people who represent your hardest cases and embed the tool in their actual workday. Watch where they stop using it. Where they work around it. Where they swear at it. That friction map is worth more than any analyst report or vendor case study. The places where skeptics give up are exactly where the product failed to start with the person.
3. Measure the problem, not the deployment. Most go-live dashboards track seats licensed, logins, and training completion. None of that tells you whether the tool is working. Tie your success metric to the pain the tool was supposed to solve. If it was supposed to cut time on work orders, measure time on work orders. New KPIs invite debate and delay action; familiar metrics accelerate it. Connect AI value to the numbers your people already get judged on, and the conversation changes entirely.
4. Say thank you. I know how that sounds. Do it anyway. When a front-line worker adopts something new and it sticks, their manager rarely says a word. That silence reads as indifference. A five-minute conversation from a senior leader saying "I heard the new system is working for you, what do you think?" does more for adoption than a company-wide rollout email ever will. Pride is portable. People take their best work to where it gets recognized. Issue 18 made this point harder than I will here. Go read it if you haven't.
Deep Dive:
The organizations that get AI adoption right aren't the ones with the best tools or the biggest budgets. They're the ones who asked a simple question before signing anything:
At what point in our procurement process does the person who actually has to use this get a vote?
Not a survey. Not a focus group after the fact. A real vote, with the power to change the outcome.
If you can't answer that question cleanly, you already know why your last implementation struggled.
No deep dive this week. Just that one question on your next leadership agenda. It will do more work than a consultant's deck.
Thanks for reading!
The Kesari piece in MIT Sloan is worth your full attention. Go read it before you approve your next AI spend.
https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/
Where have you seen this go right? Where have you seen it go sideways? Drop a comment or send me a note at jt@jasontate.ca. Push back on anything I got wrong.
The newsletter isn't the conversation. The conversation is the conversation.
Have a great long weekend!
See you next Friday.
Best,
JT
PS - If someone forwarded this to you and you want it in your inbox directly, subscribe HERE.