A recently published study from MIT brings to light the effects of relying on AI models to think critically for us
Kayla Price (she/her/they) // Contributor
The study in question split participants into three groups and recorded each group’s brain activity while they wrote an essay. One group used the Large Language Model (LLM) ChatGPT throughout the entire writing process, the second used a search engine to assist their writing, and the last had no digital assistance, relying solely on their own brainpower to write the paper. In an additional writing session, the LLM group edited their paper using only their brains, while the brains-only group used LLM tools for editing.
All participants were asked to quote parts of the essay they had written, and the LLM group struggled to do so accurately. As outlined in the study: “Dependence on LLM tools appeared to have impaired long-term semantic retention and contextual memory, limiting [the] ability to reconstruct content without assistance.” Put differently, members of the LLM group were largely unable to recall details of the essay they had written, likely because the LLM afforded only superficial engagement with the material. Although this is one of the first studies of its kind, the results are significant in the post-secondary context.
Given the well-documented use of ChatGPT in universities, the AI shortcut may be undermining the premise of higher education and creating what the study refers to as cognitive debt: “a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.” In other words, when students offload the work of critically engaging with the material and discerning which information to include, the result is more biased and superficial writing.
Still, students remain divided, and it’s unclear whether the drawbacks of AI will be enough to outweigh the allure of convenience. An Emily Carr student, who wishes to remain anonymous, described how they find LLMs helpful, stating, “I use LLMs a lot, everyday. LLMs are most useful (and safe) when they’re used within the boundaries of your own expertise.” Saba Amrei, a fourth-year CapU student, has a different opinion: “I don’t think ChatGPT or AI should be allowed in school or university. The purpose [of education] is to have students do the work and learn the process.” Luke Hopkinson, a first-year student at the University of British Columbia, shared that their refusal to use AI goes beyond its unreliability and environmental impact: “AI is trained upon the history of English literature and internet content, meaning that it carries all of the inherent biases.”
Although the findings of this study are considered “preliminary” due to the limited number of participants, they offer evidence of the potential long-term costs of continually trading mental effort for short-term convenience. The study warns, “When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.”