There's a gap between how much AI is being used at work and how little of it is showing up in company results. Workers say AI is saving them hours every week. Companies are reporting small gains, at best. Both things are true at the same time, and that's the problem worth thinking about.
Ethan Mollick, who teaches at Wharton and writes the One Useful Thing newsletter, named the dynamic clearly in a piece earlier this year. His argument: individual workers are getting real productivity gains from AI, but those gains aren't translating to organizational performance, because organizational performance requires organizational innovation, and the muscles for that have atrophied. He laid out a three-part framework for how to fix it: Leadership, the Lab, and the Crowd.
We think Mollick is mostly right. We also think the framework needs some translation for the kind of company most of our clients run — 30 people, or 80, or 200. Not Shopify. Not Duolingo. Not a Fortune 500 legal department with a head of AI strategy and a budget line for pilots.
Our approach is informed by Mollick's framework and has evolved from it. Here's what each piece looks like when you're not a giant enterprise — and where the shape of the work has to change.
What changes at SMB scale
In a Fortune 500, Leadership, Lab, and Crowd are three different sets of people. In a 60-person company, they're often the same five people wearing different hats on different days. That doesn't make the framework wrong. It just means the work has to be done differently — more deliberately, with fewer people, in less time, with no budget to throw at it.
It also means the failure modes are different. A big company's AI initiative fails because of brittle workflows and weak integration. An SMB's fails because the founder got excited about an agent, bought a $30,000 platform, ran out of time to roll it out, and now nobody talks about AI in meetings because it's a sore subject. The constraints are different. The work is different.
Leadership: paint the picture, then keep painting it
Mollick's point on leadership is that urgency isn't enough. Your team doesn't need another memo about how AI is the future. They need a specific, concrete picture of what the next twelve months actually look like in your company. What will people's jobs feel like? Will efficiency translate into layoffs or growth? How will people be rewarded for using AI well? How will you know people are using it well? You don't have to know everything, but you have to have a direction you're willing to name.
For SMB leaders, this is mostly a discipline problem, not an information problem. The picture you paint on Monday gets contradicted by the cost-cutting conversation on Friday. People notice. The most damaging thing a leader can do right now is signal AI enthusiasm in public while privately treating it as a way to avoid hiring.
The companies we see doing this well aren't doing anything magic. They're saying out loud, and repeatedly, that productivity gains will be invested back into the team rather than turned into headcount reductions. They're modeling AI use themselves, in meetings, where everyone can see it. They're naming what they don't know. None of that requires a strategy document. It requires the founder or the leadership team to be willing to talk about this as adults, in front of their people, more than once.
The Crowd: where the real learning happens (and where it hides)
This is the part of Mollick's argument we think is most important and most under-explored for SMBs.
Here's what's already happening in your company, whether you've named it or not. According to MIT's State of AI in Business 2025 report (Project NANDA, July 2025), employees at over 90% of companies report regularly using personal AI tools for work — often without IT approval, often on personal accounts, often invisible to leadership. Only about 40% of companies have purchased official AI subscriptions, but employees aren't waiting. They're already crossing what the MIT researchers call the "GenAI Divide" through shadow use of consumer tools.
Mollick has a name for this: Secret Cyborgs. These are workers using AI well, in private, and not telling anyone — sometimes because they're worried about getting in trouble, sometimes because they're worried they'll stop getting credit for their work, sometimes because they suspect that if the company finds out how much faster they can work, the reward will be more work rather than more recognition.
Read that paragraph again, because it's the operational reality of AI in your company right now, no matter what your official policy says. There are more reasons for your team to hide their AI use than to share it. And until you change those incentives, the most productive people in your company will keep getting more productive in private, and you will keep wondering why your "AI initiative" isn't moving the needle.
The fix isn't a policy. It's a posture. Make experimentation safe. Be biased toward yes when people ask whether they can try something. Build incentives — real ones, not a Slack channel called #ai-wins — for the people who figure out transformational uses and share them. Reassure your team, in specific terms, that revealing their AI use will not lead to their job disappearing.
The companies that get The Crowd right at SMB scale do something else, though: they treat AI literacy as a baseline competence the whole team needs, not a specialization for the early adopters. The Secret Cyborg pattern thrives in companies where AI fluency is uneven and unspoken. It dissolves in companies where everyone is expected to be developing the skill, and where the conversations about what's working happen out loud.
The Lab: not a team, a habit
This is where SMBs have to most aggressively adapt Mollick's framework.
In Mollick's version, The Lab is a centralized group of subject matter experts and technologists who build prototypes, develop benchmarks, and turn the best discoveries from The Crowd into deployable tools for the company. That works at scale. It doesn't work at 60 people.
At SMB scale, The Lab isn't a team. It's a habit. Specifically, it's the habit of taking the prompts, workflows, and small tools your team is already discovering, and making sure they spread. A spreadsheet of prompts that worked. A 20-minute Friday demo where someone shows how they used Claude to clean up a contract. A small internal channel where useful agents get shared and refined. None of this needs a budget. It needs a person who owns it and fifteen minutes a week.
The benchmarking work Mollick describes — figuring out which AI models actually perform best on your business's specific tasks — matters even more at SMB scale, because you can't afford to be wrong about which tool to standardize on. The good news is that for most SMBs, "vibes-based" benchmarking is genuinely enough. Take the five tasks your team does most often. Try them in three different tools. Compare. Pick. Revisit in three months, because everything will have changed.
The piece of The Lab work that gets skipped most often is building things that don't work yet. Mollick is right that this is where competitive advantage lives. The first company in your industry to have a working agent for client onboarding, even a clumsy one, will figure out the real problems six months before everyone else. That's worth doing badly now, not perfectly later.
When you do this deliberately: the AI Dojo
All of the above leaves an SMB leader with a coordination problem. Leadership paints the picture. The Crowd does the discovering. The Lab habit captures and spreads what works. Three things to hold together, no dedicated team to do it.
The approach we've developed at Kinetic Change is to collapse those three things into one container. We call it the AI Dojo.
The word is borrowed from martial arts, where a dojo is a practice space: somewhere skill is built through repetition, feedback, and real stakes. The AI Dojo applies that structure to AI adoption inside a company. A small team works on something that actually matters in the business, for four to eight weeks, with a coach embedded. The learning happens through doing the work, not alongside it. Real output ships at the end.
The design solves Mollick's framework in a different shape than he describes. The team going through the Dojo is The Crowd, doing the discovery. The artifacts they produce — prompt libraries, workflow templates, governance starting points, shared review norms — are what The Lab is meant to produce, generated as a byproduct of real work rather than by a separate group. And because alumni rotate back into their regular teams and carry the practices with them, the Dojo functions as a seeding mechanism for the whole organization rather than a silo for the most enthusiastic.
This is also the answer to the Secret Cyborg problem. The Dojo is designed around team practice, with psychological safety built in by the structure itself: "I don't know" is treated as the fastest path forward, not as a failure. AI use happens in the open, with a coach in the room, on work that matters. The conditions that drive hidden use don't show up.
We run two formats: one for software development teams, one for operations, product, marketing, and leadership. Both are grounded in evidence about what actually changes behavior in organizations, rather than vendor marketing about what AI is supposed to do.
The Dojo isn't the only way to do this work, and we'd be the first to tell you it isn't always the right starting point. But it's the shape of the answer we've found, after years of watching individual AI gains fail to become team gains.
Why we know how this goes
The pattern Mollick describes — high individual adoption, low organizational gain, gap closed by people figuring it out together — is not exclusive to AI. It's the pattern of every significant technology shift in the last twenty-five years.
Cloud computing followed it. So did mobile. So did DevOps, which started as a few people doing things differently in private and only became organizational practice once leadership figured out how to talk about it without scaring everyone. So did Agile, which most companies still don't do well, twenty years on, because the muscles for organizational learning were the bottleneck. Not the methodology.
At Kinetic Change, we've spent careers in those transformations. Erika has led agile and agentic AI programs at scale and consulted on transformation work at companies from startups to Fortune 100s. Fayoké has spent twenty years in design and strategy, building digital infrastructure for organizations adopting new ways of working. The thing both of us learned, in different domains, is the same thing Mollick is pointing at: the technology is rarely the hard part. The hard part is whether people feel safe enough to use the new tool well, and whether leadership is patient enough to let learning compound.
The companies that did well with cloud, mobile, DevOps, and Agile were not the ones with the biggest budgets or the most consultants. They were the ones who treated the rollout as an organizational learning problem and staffed it accordingly. The same will be true with AI.
What this looks like in practice
If you're a senior or middle manager at an SMB reading this and wondering what to actually do tomorrow, here's the short version.
Talk about AI in your team's standing meetings, not as a special initiative but as part of normal work. Ask your direct reports what they're trying. Make it boring to share. Pick one cross-functional process — onboarding, proposal writing, customer follow-up, financial reporting — and treat it as your first prototype. Don't buy anything yet. Spend three months learning where AI actually helps in your specific company before you sign a contract.
And expect this to take real time. Mollick's framework, our adaptation of it, and the experience of every previous technology transformation all point to the same thing: organizational learning is the bottleneck, and there is no way to shortcut it with a tool. There is, however, a way to do it deliberately, with people who have done this kind of work before.
That's what we do.