2026.06: New AI tools won't help if your people aren't ready
This week, three releases made the real AI problem impossible to ignore.
It's not about what AI can or can't do anymore. The technology works. The problem is the gap between what leaders want to achieve and what their people can realistically support. A Kyndryl survey found that 45% of CEOs believe employees are resistant or even hostile to gen AI, and most companies lack both a change management strategy and formal training to close that gap.
So this week, when Anthropic launched Cowork to handle cross-app workflows and OpenAI upgraded Deep Research to clear analysis backlogs and quietly released Frontier to manage AI agents like digital employees, my question wasn't "Are these tools powerful enough?" It was: "Can your team actually adopt them without breaking?"
The pattern? The bottleneck shifted from technology capability to organizational readiness.
Let's break it down.
Signal:
Signal One: Cowork Handles the Handoffs. Remove the app-switching bottleneck.
Anthropic released Claude Opus 4.6 and a major upgrade to its Claude Cowork agentic AI tool. Cowork reads files, organizes directories, composes documents, and completes complex workflows across multiple business applications. Think daily briefings that pull data from Slack, Notion, and GitHub, or research that turns into PowerPoint presentations and Excel workbooks without you toggling between six tabs.
Signal Two: Deep Research Clears the Analysis Backlog. Faster decision-making.
OpenAI upgraded ChatGPT's Deep Research feature with GPT-5.2, adding selective source control, real-time tracking, editable research plans, and a fullscreen report view. It's positioned as a controllable, business-ready research tool that competes with specialized analysis software.
Signal Three: Frontier Fixes the Deployment Bottleneck. Remove the "one-off" experiments.
OpenAI quietly launched Frontier on February 4, a platform for building, deploying, and managing AI agents as digital employees with identity, permissions, onboarding processes, and performance reviews. Unlike standalone chatbots, Frontier provides shared business context (data warehouses, CRMs, internal apps), parallel agent execution across real workflows, built-in evaluation and improvement tools, and governance controls (identity management, audit logs, access controls).
Scale:
Scale One: Cowork Handles the Handoffs. Remove the app-switching bottleneck.
Start Here: Pick one recurring cross-app workflow where the person currently doing it manually is willing to pilot the change. Don't force it on resistant team members first. Run a side-by-side pilot where the team member reviews AI-generated outputs against their manual process, documents what "good" looks like, and defines which elements require human judgment. Grant read-only access to connected applications first. Track time saved, output quality (edits required), and, critically, employee confidence and buy-in for 2-4 weeks before expanding to additional workflows or team members.
Scale Two: Deep Research Clears the Analysis Backlog. Faster decision-making.
Start Here: Start with one repeatable research type where an analyst champions the pilot and helps define what "good AI research" looks like versus what requires human verification. Build from buy-in, not mandates. Train analysts on evaluating AI research quality, validating sources, and adding strategic synthesis AI can't infer. Define source filters and report structure upfront, and keep human review for all strategic recommendations. Track research turnaround time, source quality, decision impact, and, critically, analyst confidence in using AI outputs for 30 days before expanding to additional research categories or team members.
Scale Three: Frontier Fixes the Deployment Bottleneck. Remove the "one-off" experiments.
Start Here: Choose one high-volume workflow where the team currently doing the work helps define which tasks agents should handle versus which require human judgment. Involve them in designing the change, not just experiencing it. Run a visible pilot where employees see every agent action. Provide training on reviewing agent outputs and handling exceptions, and commit to redeploying employees to higher-value work as the system scales. Connect agents to one system with read-only access first. Track output quality, cycle time reduction, exception rate, and, critically, employee adoption, confidence, and role evolution for 30-60 days before expanding agent permissions or deploying across additional teams.
Deep Dive:
The tools are ready (see last week's Deep Dive). Your people aren't. A Kyndryl survey found 45% of CEOs believe their employees are resistant or hostile to gen AI. Meanwhile, 31% of workers admit to actively sabotaging their company's AI initiatives.
This week's Deep Dive digs into why new AI tools keep failing at the organizational level, what three psychological needs drive the resistance, and how companies like Siemens, Dell, and Moderna are building adoption that actually sticks.
Plus: the one conversation, one experiment, and one measurement you can run this week to find out if your team is ready.
Thanks for reading!
My hot take: pick one thing from the signals above and do something about it. One conversation. One pilot workflow. One measurement. The work that matters isn't sexy, but it's the work that compounds.
See you next Friday!