Human Skills as Leverage
By Andrew Calvert, PCC
Mike’s Note: Andrew and I have been colleagues and friends for 18 years. We previously worked together at AchieveGlobal and then Miller Heiman Group in Southeast Asia, and both of us have extensive experience working with corporates across Asia in Sales Productivity and Leadership Development. Andrew’s blog here serves as a summary of the stories he and I have collected through our training and client interactions: how human skills and attitudes are still the differentiator for a healthy corporate culture that keeps its competitive advantage, even with the advent of AI. Read on and learn from Andrew’s insights.
Why Productivity Is No Longer the Differentiator
For most of my career, effort created value. If you worked harder (built more slides, analyzed more data, wrote better proposals), your odds improved.
That equation has shifted. Slides can now be generated in minutes, lengthy reports summarized instantly, and ideas? Ideas can be produced faster than most teams can evaluate them. The bottleneck has moved from production to judgment.
You see, when everyone has access to similar AI tools, productivity ceases to be a performance differentiator, and leadership quality becomes the leverage.
Research from the McKinsey Global Institute estimates that roughly one-third of work hours rely on social and emotional skills that are difficult to automate, including leadership, collaboration, negotiation and complex decision-making (https://www.mckinsey.com/mgi/overview/2023-in-review/the-future-of-work-after-covid-19).
Even in highly digitized environments, human capability remains structurally embedded in value creation.
What has changed is where scarcity lives: in attention, discernment, trust and the ability to integrate complexity.
Daniel Kahneman’s work on dual-process thinking is helpful here. In Thinking, Fast and Slow, he distinguishes between fast, intuitive cognition and slower, deliberate evaluation.
AI amplifies the fast side. It generates options, drafts answers, and surfaces patterns at speed. What it does not replace is the slower work of deciding whether those outputs are relevant, ethical, strategically aligned, or wise. (Ethics anyone?) If AI expands the supply of possible answers, leaders must strengthen their capacity to evaluate them.
I see this in many teams. More output does not automatically translate into better performance. Performance improves when teams can distinguish signal from noise and decide what deserves sustained attention.
Example: Imagine a regional sales team using AI to accelerate pipeline analysis.
Every week they generate:
Automated opportunity summaries
Risk scoring on every deal
Suggested next steps
Competitor intelligence
Pricing sensitivity scenarios
Forecast confidence indicators
The volume of insight increases dramatically. But quarter-end results don’t improve.
Why? Because the team reacts to everything.
Every risk flag triggers activity. Every AI suggestion becomes a task. Every data point is discussed. Attention fragments across twenty small optimizations.
Contrast that with a different team using the same tools.
They still generate the full data set. But in their weekly review, the leader asks three questions:
Which two deals actually determine the quarter?
What is the single assumption in each that, if wrong, collapses the deal?
Where is our confidence coming from?
Instead of chasing every signal, they focus sustained attention on the handful of leverage points that move revenue. Same tools and outputs. Dramatically different performance.
In the above example, discernment from the leader shapes what gets pursued and what gets parked. Curiosity determines the quality of the questions guiding both humans and machines. Humility (being willing to ask, not tell) reduces overconfidence when outputs appear polished but may be flawed. Connection creates the conditions where people feel safe to challenge assumptions.
Amy Edmondson’s research on psychological safety shows that teams perform better when members feel safe to speak up and question decisions (https://hbr.org/2025/05/what-people-get-wrong-about-psychological-safety).
In an AI-rich environment, that safety becomes even more important. Speed amplifies whatever culture already exists. If people hesitate to raise concerns, errors scale quickly.
So what changes for leaders?
The focus shifts from generating more output to improving evaluation. Instead of asking only, “What did we produce?” leaders can ask, “What assumptions are we making? Where might we be overconfident? What would change our mind?” Those questions slow decision-making just enough to protect quality.
Cal Newport, in Deep Work, argues that the ability to focus deeply is becoming increasingly rare and therefore increasingly valuable. In an AI-enabled workplace, focused evaluation and synthesis may be one of the most strategic uses of a leader’s time.
Reflection Questions
1. Where is my team confusing activity with progress? Look at your last month of work. What increased: output or outcomes? Where did volume rise without a measurable shift in performance?
2. What do I consistently reward in my team: speed or judgment? When someone is praised, is it for producing quickly or for thinking clearly? Your incentives quietly teach people what matters.
3. When was the last time someone openly challenged an AI-generated insight in my team? If you cannot recall an example, it may not be because the output was flawless. It may be because the environment does not yet make dissent feel safe.

