You tried everything.
You encouraged. You equipped. You sat back, ready to enjoy the agentic explosion that was surely coming.
The result? Nothing happened.
Give your designers a lovable AI-generated prototype — they go back and redraw it in Figma. Encourage your developers to build with AI — they spend more time fixing the problems AI introduced than they saved. Show the amazing possibilities to your management team — they look back at you with glazed-over eyes.
It’s not like anyone is against AI.
But the opportunity is shaped exactly like a really sticky problem.
You’ve seen the solo builders. Shipping incredible products from their basements. One person doing the work of ten. Surely that scales to your team of thirty, right?
But it doesn’t. The adoption just doesn’t happen.
Over the last few days, I’ve been invited to support three major Swedish brands — different industries, different teams — and I’ve seen the exact same pattern.
We are missing the why on an organizational scale.
Here’s what most companies get wrong: AI adoption doesn’t start with tools. It doesn’t start with training. It doesn’t even start with enthusiasm.
It starts in the boardroom.
If the opportunity isn’t shaped like a document management can make a formal decision on — one that will fundamentally change the company’s output while shielding it from hairy consequences — your mandate to take advantage of the new technology shrinks fast.
And this is the part of the story solo builders never have to think about.
“But governance sounds like lawyer stuff.”
Hear me out. Because solving this piece of the puzzle might just change the game.
AI governance is simply this:
Compliance — how the company adheres to the policies you put in place, and what happens when there’s a breach or a deviation.
Autonomy — what can your team members do by themselves? Together with their team? What needs to be escalated? Without this framework, people hesitate. They look at you and wait to be told what to do. And as long as that’s happening, you won’t get anything like a proactive, innovative AI workflow in place.
Policies — GDPR, WCAG, incident handling, down to the last team member. If you haven’t properly covered how you’re going to respect these, everyone walks on eggshells.
Think about it: just because you give each of your team members a chainsaw doesn’t mean the trees will fall in the right direction.
With the right framework in place, something beautiful happens.
Policies. Acknowledgment. Budgets. A chosen methodology that guides your team in making sound decisions.
Now you can equip your teams and send them off into space. They report back when they’ve solved the problem, collected the data, and can prove their solution — not with opinions, but with numbers.
And here’s what nobody is talking about: the knowledge.
When your team adopts AI in full swing, an enormous amount of knowledge is created. Every experiment. Every pattern that didn’t work. Everything that succeeded. Today, this is either ignored entirely, or — best case — stored in your chosen vendor’s catacombs in a form you don’t control.
With the right infrastructure, all of this becomes a semantic, agent-readable environment that supercharges every future agent conversation. A goldmine that compounds over time.
So here’s the uncomfortable truth, design leader.
Three layers of reality:
IT and management can turn on the language model. But they don’t know what you need, and they don’t know what the risks are. So how can they give you the formal go-ahead?
Your individual team members can be courageous and start deploying AI all day. But if they’re responsible — and most people in organizations are — they won’t take initiative without a formal mandate, proper guardrails, and clarity on what’s expected of them.
That leaves you.
You understand both the opportunity and the organizational reality. You see what’s possible and what’s blocking it.
Nobody else in the organization will drive the governance and compliance initiative. It has to be you.
I’ve had the honor of guiding teams through this exact transition. I’ve built the tools — ready-made agents and skills that not only help create the policies and framework, but use each policy as a recipe to develop the necessary support structure. Infrastructure that traces every decision and release back to who, where, and when — in case of an incident. And infrastructure that captures the goldmine of knowledge your team will produce.
If your team is stuck in the gap between AI enthusiasm and AI execution — I’d like to help.
What’s blocking AI adoption in your team? Tell me in the comments.