What AI won't change about consulting
Every hype cycle misses the same truth: technology shifts the tools, not the terms.
When I hear consultants talk about AI, I feel a strange déjà vu: a flicker of memory from the early 2000s.
The dot-com mania was in full swing, with consultants and entrepreneurs swearing the old world was about to vanish. Everyone believed the future was being rewritten in code: faster analysis, cheaper slides, smarter models. We spoke in buzzwords, convinced we were surfing history.
Then, as always, the tide receded, and the future arrived a bit slower than we expected.
That memory haunts me now every time I hear the same breathless tone around AI, the same conviction that “this time it’s different.”
It never is.
In an interview at a business forum, reflecting on Amazon’s early strategy, Jeff Bezos said something that has stayed with me ever since.
He explained that people always asked him what would change in the next decade, but rarely what would not change.
His answer revealed a deep truth about time and focus:
“You can build a business strategy around the things that are stable in time.”
For Amazon, that meant anchoring around the simple, eternal truths of customer behavior: people would never want slower delivery, higher prices, or a smaller selection.
It was a meditation on permanence disguised as business advice.
And that’s why it resonates with me now: because in every period of technological upheaval, future success comes sometimes from predicting what will shift, but always from understanding what will stay still.
I have been thinking about that line a lot lately.
AI has turned consulting into another hype cycle… a familiar one. ERP, outsourcing, digital, agile, blockchain, now AI. The names change, the pitch decks update, the fundamentals remain rooted.
What matters is not predicting every disruption but seeing what endures beneath them.
The constants I see
I still remember my European years commuting between London and Milan every single week.
I would wake up on Monday morning at 4am for my 6am flight out, and take the Friday 8pm flight back. Same suitcase, same coffee at the British Airways lounge at Heathrow, same rush to Piazza del Calendario in the heart of Milan.
It was exhausting but also satisfying.
I learned that clients never really cared how advanced my frameworks were. They cared if they could trust me in meetings with their bosses. They cared if I could help them see the problem clearly, define a credible path to a solution, and then follow through.
That has never changed. And I doubt it ever will.
Clients will never ask for advice they can’t trust. They will never say, “Please, confuse me with more options.” They will never prefer a consultant who can crunch the data but can’t read the room.
Charlie Munger had a mental trick for this: invert, always invert.
Instead of asking how AI will change consulting, ask:
If AI rewired everything, what would still survive?
AI won’t replace the credibility that gets you invited into the meeting in the first place.
It won’t walk into a hostile steering committee and de-escalate politics.
It won’t sense that the same framing lands one way in London and another in Hong Kong.
It may get faster, more articulate, and more context-aware, but it will never carry moral authority, political intuition, or lived experience.
What is the human moat, then?
Consulting is not just an information business: it is the governance of decisions.
That decision calculus is not taste or style, but the hard work of specifying the objective function, choosing the loss you are willing to tolerate, and sequencing interventions that can survive an organization’s immune system.
Models compute. Humans decide what “good” means.
If there is a human moat, let’s name it precisely.
Is it “interpretation”? Maybe, but I think it is closer to arbitration among conflicting incentives and constraints.
In a bank, for example, the “right” answer must thread capital requirements, union agreements, brand risk, regulator mood, and the unwritten rules of status. None of that lives cleanly in your dataset.
AI doesn’t replace consultants, but it certainly removes alibis. It collapses the cost of manufacturing plausible plans, so scarcity shifts from answers to alignment. The edge lies in killing 90% of the options for reasons no model can see, and then standing behind the remaining 10% with your name on the line.
Here’s an uncomfortable truth: when you “explain the model,” you are manufacturing shared acceptability so that accountable humans can sign their names to a decision.
That’s the layer AI cannot touch: setting decision rights, shaping incentives, designing irreversibility carefully, and distributing blame and credit in ways that keep the system stable while it changes.
So, the trillion-dollar question: what remains scarce when compute is cheap?
At least four things, and I am going to name them.
Trust. Trust is a track record of exposure: you made calls under uncertainty, were transparent about risks, and owned the consequences. Trust is a lagging indicator. AI can draft arguments, but it can’t take risk; therefore, it can’t accumulate trust.
Problem framing. As the marginal cost of generating options approaches zero, the marginal value of pruning them rises sharply. Framing is the discipline of naming the objective, constraints, and acceptable loss so that the vast majority of the solution space is eliminated before anyone opens a slide. Models enumerate; framing economizes attention.
Stakeholder alignment. Implementation is distributional: every decision allocates pain, status, and budget. That’s why most “right” answers fail in the wild. Alignment is a credible deal among actors with conflicting interests, and no model optimizes for that equilibrium by itself.
Accountability and ownership. Advice without ownership is performative. Someone must specify decision rights, design irreversibility carefully, and set escape hatches for when reality pushes back (and reality is right 100% of the time…).
AI can accelerate thinking, but it can’t accept liability, navigate status games, or translate local constraints into credible commitments. It can propose, but not persuade. It can simulate empathy, but not earn it.
The job, then, is to use AI as a lever to reclaim time and reinvest it in the only scarce assets that make us go exponential: trust, framing, alignment, and ownership.
The tools will evolve. The buzzwords will change. But the essence of consulting (i.e., the craft of helping humans make consequential choices under uncertainty) will not.
And that is why I believe this profession will still matter.
As usual, if you enjoy reading Consulting Intel, please do me a favor: spread the word and share this post.
👋
👀 Links of interest
A few corners of the internet you may find interesting:
As the inspiration for this post today was a quote from Jeff Bezos, I thought it’d be good for you to have a look at this post;
Have you looked into the Leaders Toolkit? It is a deck of 52 tools, frameworks, and mental models to make you a better leader (use code CONSULTANT10 for 10% off);
The Consulting Intel private Discord group with 250+ global members is where consultants meet to discuss and support each other (it’s free).