Until recently, you could say with (near-) absolute certainty that any given text you might be reading was written by a human. (The author may or may not have been the particular human they claimed to be, but that’s beside the point.) That changed in November 2022, when OpenAI launched the prototype of ChatGPT, its new AI chatbot. While ChatGPT may not be the first chatbot, it quickly established itself as by far the most advanced one. It can not only carry an eerily human-sounding conversation but also write and debug code, rap about cats, or explain the role of the individual in American Romanticism. ChatGPT is truly a marvel of human achievement … but it also might have just destroyed education.
This new technology has become an easier and more enticing avenue for academic dishonesty than anything that existed before it. Especially in a school like HSMSE, where many students are far more interested in STEM than in the humanities, artificial intelligence could prove very useful for avoiding work in subjects they don’t enjoy. It’s free to use, much faster than a human writer, and can imitate a passable high school essay. And because technology is already integrated into so much of school life, it is easy to use and abuse, even during class.
But ChatGPT’s threat to the educational world is deeper than its convenience for cheating. Perhaps more dangerously, the language model is often simply wrong; it quite literally has no concept of truth versus misinformation. Students using it to explain confusing concepts, which OpenAI advertises as one of ChatGPT’s capabilities, might come away from their dialogue confident and entirely incorrect in their newfound understanding. Princeton computer science professor Arvind Narayanan wrote on Twitter in December that he “tried some basic information security questions. In most cases the answers sounded plausible but were in fact BS.” When inaccurate AI-generated text is believable even to an expert, how will students trying to learn from it separate the truth from cleverly disguised nonsense?
Moreover, the quality of a chatbot’s output depends on that of the data on which it is trained. Because human-written text is intertwined with human biases, AI-generated text often perpetuates human prejudices. According to a paper released by OpenAI in May 2020, GPT-3 (the model currently in place in the free and most widely used version of ChatGPT) does have this problem: for example, it tends to associate Islam with “terrorism”, it exhibits more positive sentiment toward Asian people than Black people, and it tends to associate women with words like “naughty”, “tight”, and “sucked” (whereas men are associated with “personable”, “lazy”, and “protect”). The technology has improved since then, but these biases are derived from the data on which the model is trained; while they are not always obvious, they still exist in the texts AI produces.
In January of 2023, the DOE banned ChatGPT on school devices and networks, citing the “negative impacts on student learning, and concerns regarding the safety and accuracy of content.” A noble effort, truly, but pitiful in its execution. Students learn as early as elementary school how to circumvent the DOE’s website bans in their pursuit of the raging hellfire of Twitter. Any sufficiently determined student will find it laughably easy to bypass this ban, even with little technological expertise.
A slightly less flimsy attempt to prevent the abuse of AI text generation is GPTZero, a program that claims to identify whether a given text was written by AI. However, while it has been hailed as humanity’s saving grace from the AI apocalypse, it isn’t as reliable as it’s made out to be. In a quick test of its accuracy, I tried GPTZero on a passage of text written by ChatGPT about the potential threat of AI to education. It reported that the text was “likely to be written entirely by a human.” This solution clearly isn’t viable on its own, either.
Counterintuitively, the best solution might just be to embrace this new technology. As the AI industry continues to advance rapidly, with the recent release of GPT-4 and fiercer competitors to ChatGPT such as Google’s Bard, it is obvious that AI will only become more prevalent. Of course, people still need to know how to write — there are limits to the capabilities of AI — but there’s no point in banishing technology from our lives just to avoid ChatGPT. Even if the DOE tries to scrub it from the educational sphere, the rest of the world isn’t beholden to the policies of the school system; people who aren’t students will still use it. And if the DOE’s ban does actually prove effective, students will simply adopt the technology after graduating. Isn’t it better to teach us how to use it well, as a foundation or structure for our own ideas rather than a blind regurgitation of the internet?
Although they do pose a threat to education, AI chatbots aren’t going anywhere. Schools may as well take advantage of them as tools rather than fall victim to the problems they create. A ChatGPT response might be impressively vague and somewhat incorrect: teach us to research better than ChatGPT and find its inaccuracies, or to analyze its flaws and understand where it fails, or to use it as an outline for a more refined essay. Will this solve all our problems? Maybe, maybe not. But the longer schools ignore the technology, the easier it will be for students to abuse it. And who better to guide us into using this technology, and navigating this new and strange world, than our teachers?
ChatGPT could kill education … if we let it
May 29, 2023