The Overuse of AI and the Erosion of Judgment
How trust ambiguity, leadership design, and clarity determine whether AI strengthens or weakens teams.
Happy Friday everyone!
Back to my beloved routine after an intense week in Stockholm filled with conversations about tech, talent, and scalability in disruptive times.
I must confess, I do love a good routine.
So much so that I recently decided to disrupt my own, because I quickly realized how easy it is for the brain to get too comfortable. Well, not on my watch.
After four years working from my home office, I signed up for a co-working space, intentionally designed for interaction and spontaneous exchange.
My goal is simple: continue the journey uphill. Steadily. Calmly. Consistently onward.
Showing up for myself has become the mantra.
Which brings me to AI.
The AI Conversation We’re Not Having Enough
Inspired by last week’s TechArena discussions — and a recent HBR article by Professor Amy Edmondson and Jayshree Seth on fostering psychological safety when AI erodes trust — I’ve been reflecting on a topic that has impacted me firsthand:
Is there such a thing as overusing AI?
Well, as we all know, AI is no longer optional. The question is not if we use it, but how.
Teams I see thriving are not resisting AI. They’re using it strategically:
to remove legwork
to increase speed
to test ideas
to support learning
Not to replace thinking.
AI as leverage is powerful. AI as a substitute for judgment is very risky.
Trust Ambiguity
In the article, Edmondson talks about how AI can create “trust ambiguity.”
When AI produces confident but incorrect information, teams don’t just lose trust in the output. They begin to lose trust in their own judgment.
She also describes research showing that sustained AI use can undermine professionals' confidence in challenging AI recommendations, even when they have the expertise to do so.
I don't see this as a technology problem. This is a leadership systems problem.
Because when people hesitate to question, ambiguity increases. And ambiguity erodes psychological safety, which ultimately slows growth and decision velocity.
When Clarity Fades, So Does Trust
In disruptive times, operating without clarity looks like this:
strategy becomes blurred
communication fragments
information gets filtered on its way up
decisions are made on partial or distorted data
Now add AI into that dynamic.
Wrong conclusions move faster.
Unchallenged assumptions scale.
Filtered information becomes amplified.
AI doesn’t suffer consequences for being wrong.
People do.
And when teams over-rely on AI outputs without understanding the quality of the data or the assumptions behind it, critical thinking quietly fades. And that's the real risk.
Not automation, but detachment.
AI cannot:
read informal cues
sense tension in a room
adjust tone based on team dynamics
build trust through informal conversations
It cannot carry the relational glue that makes teams effective.
So we need to protect human interaction intentionally, not accidentally.
The Real Question
Maybe the conversation isn’t about AI overuse.
Maybe it’s about leadership maturity.
Are our leaders:
investing in data quality?
teaching teams how to challenge AI?
reinforcing clarity of decision rights?
protecting psychological safety?
Or are we reaching for tools faster than we are strengthening the systems that hold them?
Yes, AI increases reach, but without strong leadership architecture, increased reach just amplifies noise.
My Reflection
I’m not anti-AI. Quite the opposite, I use multiple AI tools daily, often cross-referencing and challenging outputs.
Used well, AI expands capability. But capability without clarity creates fragility.
The more powerful our tools become, the stronger our leadership systems must be.
Otherwise, we risk mistaking confident automation for sound judgment and calling that progress.