The Hidden Risk of AI at Work: Faster Individuals, Fragmented Teams
A lot of organizations think they have an AI problem because adoption feels uneven. Some teams move quickly. Others lag behind. Progress looks scattered.
But in many cases, that is not really a capability problem.
It is a propagation problem.
The issue is not whether people can use AI. It is whether what they learn, build, and discover actually spreads across the organization.
How I Got Here
I did not start with AI.
I was trying to understand why some teams feel fast, creative, and connected, almost as if they were thinking together, while other teams with similar talent feel slow and fragmented.
That led me to Social Physics: How Good Ideas Spread by Alex Pentland, which makes a simple but powerful point:
Innovation does not just come from smart individuals. It comes from how ideas move through a group.
The framework centers on three things:
- exploration: finding new ideas
- engagement: improving ideas together
- social learning: spreading what works
At first, that felt interesting. Now, working with AI, it feels essential.
What I Keep Seeing
In my organization, we have strong people across domains, and yet the same patterns keep showing up:
- different teams are running similar AI experiments
- similar tools are getting built more than once
- the same mistakes keep getting repeated
That is not because people lack skill or initiative. It is because ideas are not moving.
Social Physics Makes the Problem Easier to See
When I look at this through the lens of Social Physics, the problem becomes much clearer.
Exploration is happening.
People are trying things. AI makes experimentation fast and easy.
Engagement is weak.
Ideas often get shared too late, after something is already built, rather than while it is still taking shape.
Social learning is even weaker.
The people getting the most out of AI are not always passing along how they think, how they work, or how they got there. So others do not catch up.
The result is an organization that looks busy and innovative on the surface, but does not actually improve as a system.
How AI Can Quietly Fragment a Team
In a healthy environment, ideas move through a cycle. People explore them, refine them together, and then spread them quickly.
AI can break that cycle.
Exploration becomes private, happening in the space between individual people and the model.
Engagement comes too late, once the output is already finished.
Social learning becomes uneven, with a small group moving fast while everyone else struggles to follow.
So instead of strengthening collective intelligence, AI can quietly fragment it.
Where I Had It Wrong
For a while, I focused mostly on individual productivity.
Learn faster. Build faster. Move faster.
AI made that very easy.
But over time, I realized I was only improving one part of the system. I was increasing exploration, but not really contributing to engagement or social learning.
So the system was not improving. I was.
What I Changed
I started approaching things differently.
I began doing more live demos. I walked people through prompts and workflows. I showed failed attempts, not just polished results.
Not because it was the most efficient thing to do in the moment, but because it helped restore what was missing.
It created more engagement by letting people think through problems together. And it improved social learning by making the process visible, not just the outcome.
The goal shifted from:
“Look what I built.”
to:
“Here’s how this works, and here’s how you can use it too.”
That is how ideas start to propagate.
Rethinking What Good Performance Looks Like
AI makes it easy to focus on outliers.
The person who gets ten times faster. The one building tools quickly and pulling ahead.
I understand the appeal. But that is not the real goal.
A team does not win because one person becomes 10x more productive. It wins when ten people can move at 2x speed together.
That only happens when people are learning from each other, improving ideas together, and spreading useful practices quickly.
When propagation works, the whole system moves.
The Real Shift Teams Need to Make
Most organizations are trying to adopt AI.
Far fewer are protecting the conditions that make adoption useful at scale:
- exploration
- engagement
- social learning
That is the real risk.
Not a lack of capable people.
Not even a lack of good tools.
If we do not fix how ideas move, AI will not make organizations smarter in any meaningful way.
It will make some individuals much faster. They will learn quicker, build quicker, and produce more. But if that learning stays trapped with them, the broader organization does not get stronger. It gets more uneven.
That is the hidden risk of AI at work.
Not that people will fail to use it, but that some will race ahead while teams become more fragmented around them.
Faster individuals. Fragmented teams.
If that pattern continues, organizations will not get the full value of AI. They will get isolated gains without shared progress. And over time, that gap becomes harder to close.