To Think or Not to Think: What is AI Doing to our Critical Thinking?
Is AI a Telescope or a Calculator?
In 2011, the year Google's monthly unique visitors first surpassed one billion, Science magazine published a study showing that people who frequently use internet search engines (like Google) tend to remember less of the information they find online than people who learn the same information through offline methods (like reading a book, attending a lecture, or having a conversation).
Go back a little further, and you'll hear teachers worrying that calculators would ruin arithmetic skills. Go back further still, say 2,500 years, and you'll find Socrates arguing that writing makes people's memories weak! (He'd definitely have a stroke if he saw us now.)
So is AI damaging our cognitive skills like critical thinking or improving them? And what should we do about it?
First, what is critical thinking, and why does it matter at work?
Critical thinking is the art of analyzing information objectively, evaluating arguments rigorously, and forming independent judgments; it's the foundation of sound decision-making in a noisy world. Those who score high on scientific measures of critical thinking tend to excel in their professions and, perhaps more importantly, are far less likely to fall for clickbait, cults, or cleverly disguised nonsense.
For example,
When you evaluate job offers by comparing salary, benefits, commute, and growth potential, you're thinking critically.
When you troubleshoot why your computer is running slow and test different solutions, you're thinking critically.
When you decide what to eat based on nutrition, budget, and time, you guessed it, critical thinking.
But here’s the thing: AI can do all of those tasks for you and more, quite efficiently. So should we allow AI to take over the steering wheel of our critical thinking abilities or should we keep training the muscle?
The Bad (Yes, we’re starting here):
The truth is, getting instant answers or ready-made solutions from ChatGPT makes life a whole lot easier, which is very tempting. But this convenience leads us to skip the deeper analysis that true critical thinking requires. Psychologists call this cognitive offloading: outsourcing our brainpower to technology instead of doing the deep thinking ourselves.
Cognitive offloading is still a relatively new concept, especially in the context of AI, but it’s worth paying attention to, because critical thinking is the backbone of good decisions, strong education, and professional competence.
Take Michael Gerlich, Head of Strategic Foresight at SBS Swiss Business School. In a recent study, he found a strong negative correlation between frequent AI use and critical thinking skills. In plain terms: the more participants leaned on AI to answer questions, the worse they did on critical thinking assessments. The biggest drop-offs showed up in younger professionals, while older participants (and those with more education) were more resilient, possibly because their skills were shaped before AI started auto-completing their thoughts.
This suggests that overreliance on AI can erode the very skills workplaces value in human professionals: the ability to question, analyze, and solve problems independently.
Then there's a study from Microsoft and Carnegie Mellon in which many workers admitted that when they use AI for writing or research, they focus more on fact-checking or lightly editing the AI's output than on formulating their own original approach.
Double-checking AI's work may look like a form of critical thinking, but the study shows it is often shallow: users verify surface details (or worse, assume the AI is correct) instead of deeply investigating the content or generating ideas themselves. In fact, only about a third of users said they consistently applied real critical thought; the rest simply assumed the AI's judgment was correct.
The good:
Despite the anxieties, AI tools do have an upside, and when used wisely, they can actually boost human critical thinking.
The same Microsoft research also found that many of the 319 knowledge workers surveyed said generative AI significantly reduced the burden of tedious work like information gathering and initial drafts. Instead of spending hours digging through sources or staring at a blank page, people could shift their focus to interpreting results and refining the AI's output to match the tone, context, or quality they needed. In other words, the AI became a kind of cognitive assistant: not replacing your brain, but clearing its clutter.
This shift from doing the task to overseeing it isn't new. It echoes what happened with other tech advances. Calculators didn't kill math; they freed mathematicians to focus on the hard stuff. Search engines didn't kill curiosity; they just sped up the hunt for facts.
Similarly, tools like ChatGPT can handle the mental busywork so that we can spend more time thinking critically about what to do with all that information once we have it.
There is no ugly, there is hope
So where does this leave us?
1. AI's impact isn't automatic; it depends on how we use it.
Used correctly, AI can strengthen critical thinking skills. Tools like the popular large language models can serve as partners in critical discussion, not merely as instruments that replace one's own work or thinking.
Generative AI is fantastic for brainstorming, showing prompters choices and ideas they might not have considered. AI can also foster critical thinking when users hone the questions they ask to achieve desired outcomes. For example, getting an AI image generator to produce something that matches what you're imagining forces you to be very clear and descriptive.
2. Self-confidence while working with AI matters.
In the Microsoft study, the more confidence users had in GenAI's ability to perform a task, the less likely they were to engage in critical thinking. This finding was supported both quantitatively and qualitatively. Interestingly, the opposite was true for self-confidence: people who felt more confident in their own ability to perform and evaluate the task were more likely to critically assess AI outputs.
3. We need more longitudinal research.
There’s a significant lack of longitudinal research on AI’s impact on critical thinking simply because AI tools like ChatGPT are so new. That means we don’t yet know how months or years of regular AI use might reshape people’s ability to reason independently, solve complex problems, or sustain focus without assistance. Ultimately, the choice belongs to each of us, and to the workflows, cultures, and tools we create.
So what practical steps can professionals and leaders take to protect their most valuable asset: their talent? Here are my suggestions:
Suggestions for professionals:
Learn:
Continue building your own body of knowledge and skills, even if it is something that a computer could do for you. That will give you the ability to make connections and form new ideas that go beyond what even AI can generate.
Evaluate:
Apply healthy skepticism, even outside of interactions with a tool like ChatGPT, whether to a news article, a YouTube video, an interesting newsletter, a corporate strategy document, or any other media. Look for additional sources for claims you see, particularly ones that seem too good to be true.
Reflect:
One powerful habit is meta-prompting: instead of running with the first answer, ask the AI to critique its own response, offer alternatives, or generate counterarguments. This doesn’t just improve the output, it keeps you engaged in the reasoning process.
Also, resist the temptation to treat the first response as final. It's often just a rough draft. In an AI-driven world, the professionals who stay sharp are those who make a habit of questioning, refining, and challenging, not just accepting.
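The meta-prompting habit above can even be made mechanical. Here's a minimal sketch in Python of a helper that wraps a model's first draft in a self-critique request before you accept it; the function name and prompt wording are my own illustration, not from any particular library.

```python
def build_meta_prompt(question: str, draft_answer: str) -> str:
    """Build a follow-up prompt that asks the model to critique its own draft.

    Instead of running with the first answer, this wraps the draft in a
    request for weaknesses, a counterargument, and a revised answer,
    which keeps the human engaged in the reasoning process.
    """
    return (
        f"Original question:\n{question}\n\n"
        f"Your draft answer:\n{draft_answer}\n\n"
        "Before I accept this, critique your own draft: list at least two "
        "weaknesses or unstated assumptions, offer one counterargument, "
        "and then suggest a revised answer."
    )


# Example: wrap a (hypothetical) first draft in a critique request.
prompt = build_meta_prompt(
    "Should our team automate this weekly report?",
    "Yes, automation always saves time.",
)
print(prompt)
```

You would send the resulting prompt back to whatever chat model you're using; the point is the habit of a second, critical pass, not any specific tool.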
Suggestions for leaders:
1. Don’t Confuse Speed with Skill
Leaders should recognize that while GenAI increases efficiency, it can reduce the depth of thinking. Workers may skip essential reflection steps if they're confident in AI outputs. Managers need to balance speed with thoughtfulness. Encourage teams to document their decision rationale, not just the AI output. Build a culture that values how an answer was reached, not just how fast it was generated.
2. Foster a Reflective Culture
Workers who tend to reflect on processes and outputs are more likely to think critically with AI tools. Leaders can shape environments that reward reflection, iteration, and learning from AI interactions. Run workshops that focus on advanced prompting, output critique, and scenario-based exercises where employees must judge GenAI’s performance.
3. Train Beyond Tools: Build Cognitive Skills
The Microsoft paper emphasizes the importance of self-confidence in fostering critical thinking. Leaders should invest not only in tool training but also in their own leadership skills, so they become a source of self-confidence for their teams rather than self-doubt.
4. Design Workflow Friction Strategically
Friction isn't always bad: deliberate interventions, like adding pauses or requiring edits before submission, can promote critical thinking. Avoid automating entire workflows with no human in the loop. Instead, implement deliberate pause points, such as requiring review steps for AI-generated reports, customer emails, or code.
Think of AI not as a calculator, but as a telescope: it extends your vision, but only you can interpret what you see.