People Are Getting Dumber, and AI Is Getting Smarter
An unfiltered perspective on the shift in cognitive power
Welcome back to Tech Trendsetters! Today's episode hits different because we're talking about something that affects every single one of us – including me.
Here's the thing: humanity is experiencing a new paradox. Our tools are advancing rapidly, but instead of making us smarter, they're making us dumber (or should I say, intellectually passive). AI, particularly large reasoning models, now outperforms humans at tasks that once defined intelligence (coding, reasoning, problem-solving), while people let their cognitive effort and critical thinking skills atrophy.
Simply speaking, people don’t want to think anymore. The modern workforce blindly accepts AI-generated outputs without questioning them, and schools are adapting not by teaching better thinking skills, but by making AI tools a crutch. We are no longer the "thinking species" – we’re effectively outsourcing cognition (the most valuable skill we could ever have) to machines.
How AI Is Thinking Harder While We Think Less
Let me share something that should terrify you: while we're busy celebrating AI's achievements, we're witnessing the quiet death of human cognitive effort. The evidence? It's right in front of us.
Look at what's happening in the programming world. OpenAI's latest model, o3, isn't just solving coding problems – it's achieving results in the 99.8th percentile on Codeforces.
If you're not familiar with Codeforces, think of it as the Olympics of programming: a competitive platform where the world's best programmers solve complex algorithmic problems under time pressure.
Roughly speaking, that percentile implies fewer than 200 rated competitors on the platform can still out-program this model.
We're talking about performance levels that put it above virtually every human programmer on the planet. And here's the craziest part: it's not just mimicking human solutions – it's developing its own reasoning strategies, often more sophisticated than what humans can devise.
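A quick back-of-envelope check on that "fewer than 200" figure. The pool size below is my assumption, not a number from this article: I take roughly 100,000 active rated Codeforces users and apply the reported 99.8th percentile.

```python
# Back-of-envelope: how many competitors could still outrank the model?
# Assumptions (mine, not the article's): ~100,000 active rated Codeforces
# users, and the 99.8th percentile measured against that pool.
active_rated_users = 100_000
percentile = 99.8  # reported for OpenAI's o3 on Codeforces

# The model outranks `percentile`% of the pool; the remainder still beats it.
still_better = active_rated_users * (100 - percentile) / 100
print(int(still_better))  # -> 200
```

If the real pool of active rated users is larger or smaller, the headcount scales proportionally, but the order of magnitude (a few hundred humans at most) holds.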
The Critical Thinking Crisis
But that's not even the most concerning part. Other research shows that when people have access to these powerful AI tools, they're less likely to engage in critical thinking. I had suspected this myself for a long time, but now we have at least one study confirming I was not alone.
Knowledge workers are increasingly falling into what I call the "passive acceptance trap."
Here's what's happening: When people have higher confidence in AI, they actually engage in less critical thinking. It's not just about blindly accepting wrong answers – though that happens too. The bigger issue is what researchers call "baseline aspirational threshold" thinking. We're accepting mediocre solutions simply because they meet minimum requirements, not because they're the best we could achieve.
What am I trying to say? We're not just using AI as a tool; we're surrendering our cognitive processes to it. The very skills that made us effective knowledge workers (analysis, synthesis, evaluation) are now being outsourced to machines.
Why People Are Getting Dumber
The world isn't just watching AI get smarter – it's watching itself get dumber. And trust me, I see this happening every day in the tech industry.
AI isn't just automating simple tasks – it's automating thinking itself. What terrifies me most? Most people don't even notice it's happening.
The Decline of Critical Thinking
Let me share something disturbing from a recent study, "The Impact of Generative AI on Critical Thinking". This study of 319 knowledge workers revealed a pattern that confirms my worst fears: people are actively choosing not to think critically when using AI. They're not just being lazy; they're systematically abandoning the very cognitive processes that make us human.
Main outcome: The more you trust AI, the less you think.
It's not just correlation – it's a direct relationship. Knowledge workers often "neglect critical thinking when they perceive it as outside their job scope". Just let that sink in… They're literally deciding that thinking isn't part of their job anymore.
What's even more concerning is what researchers call "overreliance" – users accepting incorrect recommendations without question. We're not just talking about minor mistakes. We're talking about professionals in high-stakes fields blindly accepting AI outputs that could be fundamentally wrong.
Users are becoming intellectual middlemen. Remember when humans used to be problem solvers? Now we're becoming "AI output validators": quality control agents for machine-generated work.
A quote I find particularly useful here:
While critical thinking may not be necessary for low-stakes tasks, it is risky for users to only apply critical thinking in high-stakes situations. Without regular practice in common and/or low-stakes scenarios, cognitive abilities can deteriorate over time, and thus create risks if high-stakes scenarios are the only opportunities available for exercising such abilities. This phenomenon is well-documented, as in Bainbridge’s “Ironies of Automation”.
The Illusion of Intelligence – Are We Just Pretending to Be Smart?
Let’s talk about a hard truth: there’s a growing tendency for people to stop truly thinking – they’re just pretending to be smart. AI is doing the heavy lifting while humans sit back, approve outputs, and call it “productivity”.
The GPT-ification of Work – Humans as AI’s Quality Control Team
Companies love to brag about how AI is revolutionizing the workplace. But let’s be honest: AI isn’t just making work faster; it’s shifting the entire nature of human labor. We’re no longer “creators” – we’re just reviewing what AI has already done for us.
Let’s take a look at some more examples:
AI-generated code is already writing entire functions and debugging its own errors. Developers aren’t problem-solving anymore – they’re just clicking “accept” on GitHub Copilot suggestions.
AI-written legal documents are making lawyers more “efficient” – but what does that really mean? It means they’re not spending time constructing arguments, they’re just proofreading AI-generated contracts and filings.
Business reports are no longer compiled by analysts; AI extracts trends and suggests conclusions, and humans just glance at the summary.
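To make the "clicking accept" failure mode concrete, here is an invented illustration (the function and its bug are mine, not taken from any study): a plausible-looking, AI-suggested helper that a passive reviewer would approve at a glance, next to the question an engaged reviewer would actually ask.

```python
# Hypothetical AI-suggested helper: looks correct, reads cleanly, and a
# rubber-stamping reviewer would click "accept" without a second thought.
def average_rating(ratings):
    return sum(ratings) / len(ratings)

# An engaged reviewer asks the question the suggestion skipped: what if
# there are no ratings yet? average_rating([]) raises ZeroDivisionError.
def average_rating_checked(ratings):
    if not ratings:  # the edge case the "accepted" version misses
        return 0.0
    return sum(ratings) / len(ratings)

print(average_rating_checked([4, 5, 3]))  # -> 4.0
print(average_rating_checked([]))         # -> 0.0
```

The point isn't that AI-generated code is always wrong; it's that this kind of bug only surfaces if a human still bothers to probe edge cases instead of merely confirming that the happy path works.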
We’ve entered the rubber-stamp era of intelligence, where humans act as a final approval step for machine reasoning. This isn’t just speculation; here is one quote from the study:
We found that GenAI tools shift the effort of critical thinking in three distinct ways: for Knowledge and Comprehension, the effort shifts from information gathering to information verification; for Application, effort shifts from problem-solving to AI response integration; and for Analysis, Synthesis, and Evaluation, effort shifts from task execution to task stewardship
Read that again. Humans aren’t solving problems anymore. They’re just verifying AI’s solutions. And there’s a second outcome here: when people trust AI outputs too much, they stop verifying them entirely.
Which brings me to the most terrifying part…
If AI Knows More, Why Should We Learn?
For centuries, humans have had to struggle to gain expertise in fields like mathematics, medicine, law, and engineering. That struggle forced us to develop deep knowledge, refine reasoning skills, and innovate solutions. But now, when AI can do all of this without human input, why would anyone go through that struggle?
AI is removing the need for that struggle. It scaffolds complicated tasks, breaks them into digestible, ready-made solutions, and subtly reshapes how we learn, or rather, how little we have to learn.
“The Impact of Generative AI on Critical Thinking” study makes this painfully clear:
Participants reported reduced effort when GenAI tools helped to scaffold complicated tasks and information.
The moment AI steps in to assist, people put in less effort. Not because they can’t do the task, but because AI makes it so easy that they no longer see a reason to engage fully.
In my opinion, this is how deep expertise dies – not with a dramatic collapse, but with small, everyday choices to let AI do the thinking.
And we’re already seeing it happen.
Humans aren’t engaging with problems anymore; they’re reacting to AI-generated solutions. And the more this happens, the more we lose the ability to solve problems without AI. Humans are simply becoming incapable of functioning without it.
Another quote from the study:
With GenAI, knowledge workers also shift from task execution to oversight, requiring them to guide and monitor AI to produce high-quality outputs – a role we describe as “stewardship”
I don't know how you feel about this, but for me, working as an AI steward doesn't sound like a bright and comfortable life.
What happens when the next generation grows up never having had to struggle with a difficult math problem, never having had to write a paper without AI assistance, never having had to deeply analyze anything on their own?
I’ll tell you what happens:
They won’t just lack expertise – they won’t even know how to develop expertise in the first place;
They’ll default to AI for everything, because they’ve never had to rely on their own reasoning;
They’ll trust AI without question, because they have no baseline of independent knowledge to challenge it.
AI-Augmented Intelligence
For much of this discussion, I’ve painted a dark picture: people are thinking less, relying on AI more, and outsourcing cognitive effort to machines. But here’s the paradox – while AI may be eroding certain intellectual skills, it’s also creating new forms of human intelligence.
If we look at it from the positive side, AI isn’t just replacing human thought; it’s reshaping it. And if we play this right, we might end up smarter, not dumber.
AI as a Catalyst for Complex Thinking
People love to say AI makes us lazy, and for the most part, I agree. But sometimes, it actually forces us to think harder.
“The Impact of Generative AI on Critical Thinking” study also found that AI can push users to analyze arguments more critically, cross-check sources, and refine their reasoning process. That’s right – when used the right way, AI isn’t just spoon-feeding us answers, it’s challenging us.
I think this is exactly the point we should embrace. When AI highlights bias in data, we’re forced to reevaluate our assumptions. When it generates unexpected ideas, we have to interpret them, refine them, and apply them in ways AI itself can’t.
AI is neither good nor bad for thinking – it simply amplifies whatever mindset we bring to it. If we seek easy answers, it makes us passive. But if we push back, question, and refine, it can actually make us sharper.
AI Enhancing Human Creativity and Insight
And then there’s creativity. The one thing AI is supposed to be terrible at, right? Well, not exactly. Let me bring out the bright side here too.
Some participants in the same study actually found AI useful for structuring complex thoughts and generating new ideas. They weren’t just mindlessly copying AI-generated content; they were using it as a launchpad for deeper insights.
I’m not the first to observe that AI can spark new concepts. Scientists are testing hypotheses faster by running AI-assisted simulations. Even business strategists are using AI to challenge their assumptions and find solutions they would otherwise have overlooked.
Seen this way, AI isn’t killing creativity; it’s fueling it. As I always say, the problem isn’t the technology itself; it’s how we choose to engage with it.
The Real Challenge: Staying in Control
So where does that leave us? None of this means we’re safe. AI can make us smarter, but that’s not what’s happening for most people.
The reality is, critical thinking requires effort, and most people would rather take the easy way out. AI gives them that option. It makes thinking feel unnecessary. And when people don’t have to think, they don’t. Simple as that.
Yes, AI can encourage deeper reasoning, but only for those who actually care to challenge it. The rest will just trust whatever it spits out. And that’s the problem, especially if you’ve been reading our earlier episodes dedicated to the problem of aligning AI’s values and morals with those of humans.
We’re not on a path toward AI-enhanced intelligence. We’re on a path where most people won’t think at all. AI is shaping a world where reasoning is optional, and the majority will gladly opt out.
So the real question isn’t whether AI can make us smarter. It’s whether we, as a society, will let it make us dumber instead.
Thanks for joining me today – stay sharp, trust your common sense, and keep your critical thinking engaged. See you next time!