Confidence Is Not Competence: Why AI Needs Critical Thinking
In business, confidence is often mistaken for competence.
The loudest voice in the room — the one with the boldest claims, the slickest slides, the fastest answers — tends to get heard. But confidence without context can be dangerous. And nowhere is this more apparent than in the new wave of AI tools entering the boardroom.
Large Language Models (LLMs), like ChatGPT and Gemini, are trained to sound sure of themselves. As a recent New York Times article revealed, they’re getting even more confident as reasoning models evolve — yet not necessarily more accurate.
They hallucinate plausible-sounding nonsense, wrapped in perfect grammar and corporate fluency.
And that’s exactly the trap: they sound like they know.
But sounding like you know is not the same as knowing.
Think of the executive who joins a new company and confidently pushes an old playbook, regardless of whether it fits the nuances of the new business. Or the management consultant who sells strategy in generalities without fully understanding each client's specific challenges. This isn't just an AI problem; it's a human one too.
Socrates, the godfather of critical thinking, would’ve seen this coming.
He believed the appearance of knowledge was more dangerous than ignorance itself — because it shuts down curiosity. The truly wise, he argued, are those who recognise the limits of what they know.
I saw this play out recently when a junior analyst I was mentoring asked for help with a Power BI challenge. He was convinced the solution he needed wasn't possible — a conclusion, he said, that ChatGPT had confirmed. Based on past experience, I had a hunch it was possible, though I couldn't recall the exact approach off the top of my head. So we started asking the right questions: pressing the chatbot a little more thoughtfully, double-checking things the old-fashioned way with Google, and eventually we uncovered the correct solution. What struck me was how quickly the confidence of ChatGPT's initial answer had shut down any further exploration. The issue wasn't a lack of knowledge — it was the illusion that the answer had already been found.
This kind of critical inquiry matters more than ever in a world of AI-generated summaries and synthetic insights. Because when we outsource our thinking to tools that can’t explain their logic — or check their assumptions — we risk making decisions that are fast, confident, and completely wrong.
Whether it's a new executive, a flashy dashboard, or an AI copilot — confidence should always be tested.
Critical thinking is no longer just a virtue.
It’s a competitive advantage.