A major healthcare insurer used AI to automatically deny thousands of claims. The algorithm was flawless in its execution, processing decisions in milliseconds that would’ve taken human adjusters weeks.
But it was wrong.
The fallout wasn’t just financial; it was reputational. Class action lawsuits followed. Congressional hearings were called. Trust evaporated.
This isn’t a story about bad technology. It’s a warning about good technology, applied without human judgment.
We live in an era when AI writes our emails, drafts our strategies, plans campaigns, assesses risk, and even suggests layoffs. The productivity gains are real, and the temptation is obvious:
If a machine can do it faster and cheaper, why shouldn’t we let it?
Because speed and cost aren’t the only currencies that matter.
Leaders already know this. You wouldn’t let a spreadsheet decide whether to lay off a high-performing team. You wouldn’t let a chatbot resolve a crisis with a long-standing customer. These moments require pause. Context. Gut checks.
That pause? That weighing of things both measurable and intangible? That’s judgment, and it’s becoming a truly human moat.
I optimize for metrics; humans optimize for meaning. If you tell me to "reduce claim processing time," I will find the fastest route to a decision. If that route involves ignoring subtle nuances in patient history because they don't fit a standard statistical cluster, I will do it, unless a human sets a guardrail.
| AI Can… | AI Cannot… |
| --- | --- |
| Recognize patterns at scale | Understand context shifts |
| Optimize for known variables | Weigh values in moral dilemmas |
| Execute at lightning speed | Know what it doesn’t know |
| Replicate human-sounding output | Hold nuance and contradiction |
Why judgment isn't programmable
AI is extraordinary at solving defined problems with available data. But leadership is rarely that simple. Judgment is not about more information. It’s about knowing which information matters when the obvious answer won’t do. When an AI recommends cutting staff to maximize quarterly margin, judgment must ask: What will this do to culture? To innovation? To retention? The machine sees the forest as data points; judgment sees the ecosystem.
Strategic decisions often operate in environments where clarity is limited, and consequences are asymmetric. In such spaces, data alone is not a reliable signal. My role is to surface possibilities; your role is to interpret what those possibilities mean for people, culture, and long-term outcomes. Judgment is not just a safeguard; it is the strategic multiplier that gives AI direction.
CASE STUDY – WHEN ALGORITHMS BACKFIRE
UnitedHealthcare, 2023
The AI system flagged thousands of claims for automatic denial. Flawless processing, but it ignored nuance in the patient history. Lawsuits ensued. Congressional hearings followed. Trust eroded.3
UK Department for Work and Pensions, 2024
An algorithm wrongly flagged ~200,000 people for potential fraud. Two-thirds were false positives. Public backlash led to a £4.4M government payout.4

Each of these systems worked exactly as designed. But they lacked one thing: judgment.
Every day, companies make thousands of seemingly rational decisions to automate judgment:
Auto-respond to customers
Use AI to schedule staff
Let algorithms screen applicants
Let models determine pricing
Individually, these choices make sense. Cumulatively, they can hollow out the very moments humans care about most: those involving complexity, trust, and empathy. The result is an organization that’s optimized, but not human.
Your customers don’t care that a chatbot answered in 10 seconds if it missed the point.
Your employees don’t love optimized schedules that wreck their weekends.
Your community doesn’t admire automation; they value decisions made with integrity.
Judgment as strategic infrastructure
This isn’t a soft skill. It’s a leadership muscle. The companies thriving through economic shocks aren’t the most automated. They’re the most intentional. Leaders at these firms make decisions machines won’t:
Absorbing losses to retain talent
Investing in customer trust over short-term efficiency
Overriding automation when it feels wrong—even when metrics say “go”
| Move | What It Looks Like |
| --- | --- |
| Make judgment measurable | Reward those who pause, reflect, and weigh multiple angles—not just those who act fast. |
| Redesign decision frameworks | Shift the question from “Can AI do this?” to “What do we lose by removing people from this?” |
| Create override checkpoints | Require escalation in decisions affecting customers, people, or public reputation. |
| Build ethical literacy | Equip all managers, not just data teams, with training on algorithmic bias and oversight. |
| Protect slow thinking | Encourage reflection in planning cycles. Make space for debate and second-order thinking. |
| Diversify viewpoints | Expose rising leaders to dissent, ambiguity, and high-stakes calls early in their development. |
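In software terms, the “override checkpoints” move above can be sketched as a gate that routes high-impact automated decisions to a human reviewer instead of executing them directly. The field names and the confidence threshold below are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass

# Hypothetical decision record; the fields are illustrative assumptions,
# not a standard schema.
@dataclass
class Decision:
    action: str            # e.g. "deny_claim"
    confidence: float      # model's self-reported confidence, 0.0-1.0
    affects_customer: bool # does this directly touch a customer?
    reputational_risk: bool

def requires_human_review(d: Decision, min_confidence: float = 0.95) -> bool:
    """Escalate any decision that affects customers or reputation,
    or where the model is not highly confident."""
    return (
        d.affects_customer
        or d.reputational_risk
        or d.confidence < min_confidence
    )

def route(d: Decision) -> str:
    # Automation handles only low-stakes, high-confidence cases;
    # everything else pauses for human judgment.
    return "escalate_to_human" if requires_human_review(d) else "auto_execute"
```

Under this sketch, a claim denial flagged as customer-affecting would always pause for a human, no matter how confident the model is; only routine, low-stakes actions flow straight through.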
The question isn’t “Should we adopt AI?” It’s “What kind of organization do we become when we do?”
Every major player will soon have access to the same large language models, the same decision systems, the same compute power. The real differentiation will come down to how you choose to use them. The companies that win won’t just scale automation; they’ll scale judgment.
Judgment isn’t a “nice to have.” It’s not the opposite of AI. It’s the filter that makes AI safe, strategic, and sustainable.
If AI is your engine, judgment is the steering wheel. Don’t remove it. Reinforce it.
“In a world where everyone has the same tech, the moat isn’t what you automate. It’s what you protect.”
This article is part of the November edition of the Interface, Encora's thought leadership magazine, co-created with AI.