Losing Critical Thinking in the Age of “Agentic AI”
When I started my PhD at NYU in 2016, one of the most recent landmark moments in AI had taken place nearly two decades earlier, in 1997, when Deep Blue beat Kasparov at chess. Not long after I began, AlphaGo (and later AlphaZero) captivated the field and reignited mainstream interest in AI, especially in reinforcement learning. In 2017, "Attention Is All You Need" was published, laying the foundation for the transformer era, and setting in motion a wave of development that led to today: transformer layers, the GPT model family, and the widespread use of acronyms like "LLM," "RAG," and the increasingly nebulous and overused term, "Agents."
Now in 2025, I find myself deeply uneasy about the state of AI discourse, its societal implications, and how organizations are representing—or perhaps misrepresenting—these technologies. Every company claims to be building "AI-powered" solutions. Every digital workflow is marketed as "agentic." And increasingly, people seem to wear it as a badge of honor that they’ve automated away core aspects of decision-making.
Take Copilot’s meeting summarizations, for example. Synthesizing and compressing information is not just a productivity skill; it is a critical thinking function. In any profession, the ability to sift through noise, identify what matters, and retain knowledge is invaluable. And yet, one of the most celebrated Copilot features is the automatic extraction of "key takeaways" from meetings. This sounds helpful, but it transfers the cognitive responsibility of prioritization and memory away from the human and onto the program. It echoes the way social media platforms filter what we see on our feed, but now it’s being applied to our own work and decision-making.
Information synthesis is like a muscle: it must be exercised constantly to maintain sharpness. What happens when an entire generation of students and young professionals grows dependent on an algorithm like Copilot before learning how to define “importance” for themselves? In the long term, it may breed a toxic dependency, an intellectual atrophy not unlike the physical decline of the humans in WALL-E, passively consuming algorithmically generated content.
Am I paranoid? Consider a research paper published by authors from Harvard Business School, Wharton, and Boston Consulting Group. The authors identify two distinct patterns of AI use: one in which AI serves as a tool to augment human thinking, and one in which it is used to delegate as much cognitive effort as possible. More disturbingly, the more powerful the AI agent was, the more decision-making power the human handed over, until the human contribution became minimal. In this study, teams that relied heavily on agents produced far less diverse responses and ideas than groups that used agents sparingly or not at all. So much for the marketing claim that LLMs are massive creativity boosters.
One key takeaway from research published by Microsoft and Carnegie Mellon supports the idea that offloading cognitive tasks to AI can actually make humans worse at them. Participants who blindly accepted AI suggestions reported weaker critical thinking skills. Like the study mentioned above, the Microsoft/CMU study also found that higher confidence in GenAI resulted in less critical thinking among users.
As an aside: if you blindly took my word on my synthesis of these two studies without checking the sources yourself, congrats! You fell into the same trap that Copilot introduces with content summarization. The studies do say what I claim above, but you should always verify for yourself. I grow worried when business leaders make wild claims about increased productivity without remotely addressing the already-real impact of this technology.
I’m personally worn out by the hype around agent systems. I’ve lost count of how many times I’ve read “AI Agents/Copilot is a gamechanger!” on LinkedIn, surrounded by rocketship emojis. And who can forget the use of the em dash “—” in a social media post, the telltale sign of AI-generated content. When we stop trying to think for ourselves, we stop growing. And we won’t fully unlock the potential of agentic systems if we keep shoehorning them into problems that remove our ability to think critically.
My analogy would be “falling asleep at the wheel.” The better our self-driving cars become, the less we feel we have to pay attention while driving. Sure, we could use that time for other valuable things like reading, working, or relaxing. But we could also decide to nap, both literally and metaphorically. I fear we may already be getting sleepy.
