Will AI help humanity think more deeply—or slowly erode our ability to think at all?
As an avid enthusiast of history and literature, and as one blessed with the ability to quickly recall events from my past and knowledge acquired through years of immense reading, permit me to begin by voicing my firm belief that Artificial Intelligence (AI) surely has a valuable place in the world of “geekdom.” From search engines like Google Scholar and LexisNexis to today’s ChatGPT, these modern technological advances are truly magnificent for those who love gaining knowledge in every field of human intellectual endeavor.
Nevertheless, like many academics, I retain serious concerns about the potential abuses of AI, including an overreliance that not only produces occasional inaccuracies that can be utterly frustrating, but also threatens to replace the age-old human work of discerning right from wrong!
Recently, I was scrolling through an AI-generated reel about “notable Black celebrities who died in 2024.” The short video montage marked the passings of screen legend James Earl Jones and music icon Quincy Jones, both of whom did die last year. But then the montage went astray, incorrectly mourning the 2024 passings of other notable personalities who had actually died in 2014.

Call me a pedant and dismiss these assertions as the rantings of an aging woman slowly becoming a grumpy “get off my lawn” sourpuss. Notwithstanding, I assure you that it runs deeper than that. My fear is that anyone who watches that reel about celebrity deaths and knows no better may accept what was shown and said as the gospel truth! That viewer may then repeat the information in a discussion, a debate, or, Heaven forbid, an academic paper, and be DEAD wrong! Today, friends of mine who teach at the post-secondary and professional levels say that this uncritical borrowing of unverified sources, and the plagiarism that often accompanies it, is only getting worse. That reality is frightening when one considers that future generations of professionals may never develop the critical thinking skills that come from reading multiple sources for themselves in order to best understand, compare, and contrast the issues germane to their work.
Sadly, many users of AI are totally unaware of the reason it exists: to assist. It was formulated to process vast amounts of data, handle repetitive tasks, and provide insights faster than humanly possible. Used this way, it frees our minds for the work that only humans can do: creativity, empathy, moral judgment, and innovation.

At its core, AI is a tool, and history warns us that when tools become crutches, skills atrophy. The calculator didn’t destroy math, but it did reshape how math is taught. Will AI reshape how thinking itself is taught? Every time we ask AI to think for us instead of with us, we risk dulling our own mental sharpness. If we accept every answer unchallenged, if we lean on it instead of learning, we are quietly trading our curiosity for convenience.

Will AI “replace” humans? No! Never! Not in the truest sense, for AI does not imagine, nor does it wrestle with meaning. On a cautionary note, however, we could replace ourselves by giving up the very habits of questioning and reflection that make us human.

The remedy is not fear; it is balance. More than ever, people need the intellectual acuity to separate fact from fiction more quickly and accurately than whatever a computer-generated program concludes is “truth.” Let AI do what it was built to do: speed up, support, simplify. Let humans do what only humans can do: wonder, reason, and think. Because if we stop thinking, no technology will save us from ourselves.
A luta continua: let the struggle continue.