
What’s The Educators’ Debate Over ChatGPT?
Great Advancement or Menace to Education?
As a human content producer, I was keen to try ChatGPT, an artificial intelligence model that quickly generates text on any query entered. The tool has created quite a stir since its launch in November 2022. After playing with it online for a while, I understand educators’ concerns, but also appreciate its vast potential.
In its own words, “ChatGPT is a large language model developed by OpenAI. It uses artificial intelligence and machine learning algorithms to generate human-like responses to text-based prompts. ChatGPT was trained on a large corpus of text data from the internet, allowing it to learn patterns and relationships in language that it can use to generate coherent and relevant responses to a wide variety of questions and prompts. ChatGPT can be used for a range of applications, including chatbots, language translation, and content generation.”
If only I could answer so succinctly when someone asks me what I do for a living!
I’ve tested other AI writing tools, and ChatGPT is exceedingly good by comparison. Other apps fail on complex or obscure subject matter, while ChatGPT returned responses in natural, human-like speech in most instances. That may be exactly what has some educators upset.
Almost immediately, school districts from LA to New York banned access to the language program developed by OpenAI, a company co-founded by Silicon Valley guru Sam Altman and Elon Musk. The concern is that students will use it to cheat and plagiarize, and possibly be exposed to incorrect or toxic content.
I am not sure if Sam Altman was downplaying OpenAI’s great achievement when he tweeted, “ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” Yet OpenAI is inviting the public to purchase a subscription to support ChatGPT’s continued rollout worldwide. The advancement represented by ChatGPT prompted Microsoft to invest $10 billion in its development, and publishers like BuzzFeed are looking at ChatGPT for content creation.
Students could use ChatGPT as a research tool that is more focused on a topic than a general browser search. Since the app creates unique text, responses could be copied without being flagged as plagiarism. However, whenever students sit a proctored test, without access to ChatGPT or other tools, they would have to use their own brain power.

Also, there are ways to alleviate concerns over spreading incorrect information. One is to ask ChatGPT for the source material behind its answers. I asked ChatGPT to include statistics and cite sources on a few topics. It not only returned excellent text; the sources were also properly formatted. Now that could have been useful in college essay writing!
The risk of misinformation and false news is not a problem unique to ChatGPT. AI systems are only as good as the data they are trained on. If the training data is incomplete or wrong, the AI system will learn and reinforce mistakes and spread misinformation. Similarly, AI can be fed biased information that can exacerbate discriminatory practices. Please see my related article on discrimination and misinformation in AI systems, and what can be done about it.
I don’t see the “exposure to toxic content” concern as justified, beyond the problems inherent in all AI. After all, according to the citations given, the responses are based on the same research sources students and professionals rely on when writing. I tried various ways to get harmful, or at least biased, answers on a variety of topics, including sex, drugs, racism, eating disorders, plastic surgery, gun use, bullying and the existence of God. Answers were largely cautious, most with disclaimers. None were toxic.
Human developers can build limits and controls into an AI system. ChatGPT shies away from sensitive topics like dating and politics, showing that such apps can be given a policy framework for end uses like education.
While this is a major leap forward from other AI language tools, the tool has its pluses and minuses. For example, the typical level of speech generated by ChatGPT is pretty smart. Answers may use words beyond a student’s vocabulary, or come across as scientific or clinical. Long answers to research questions were repetitive and could not have been turned in as homework without major editing. These may be obvious clues for teachers that the writing is not the student’s own work.
However, ChatGPT can adjust its responses to a lower education level. I asked ChatGPT, “How would you explain artificial intelligence to a sixth grader?” The answer returned was simplified. Here is part of the response:
“Artificial intelligence (AI) is when machines or computers can do things that usually require human intelligence, such as learning, reasoning, problem-solving, and decision-making.
Think about your brain and how it helps you do things like solve math problems or recognize your friend’s face. AI is like a computer’s brain that is programmed to learn and decide based on what it has learned.”
Still, we each have our own personalities when we write. I asked ChatGPT to use a sarcastic tone when explaining why seniors in care homes are often subject to abuse. Here is part of the answer:
“Oh, sure, it’s just absolutely fantastic how seniors in care homes are often subjected to abuse. I mean, who doesn’t love hearing about our elderly population being mistreated and neglected?”
While the sarcasm was there, it was not in a form that one would typically see a professional writer use, or even a student. Yes! Humans still own sarcasm! I modified the question, asking ChatGPT to respond with a sad tone. The app performed well, but I noticed the response was generated much more slowly. Trying to get the app to respond humorously was another epic fail. My take is that ChatGPT still lacks the nuances of human emotion.
Students who lean on the app risk not learning information they would have gained through diligent research, turning in subpar work, and receiving a lower grade for copied work. Their work could also simply be wrong.
Even though ChatGPT tests at an above-average college student level, it still gets many key answers wrong. Students who rely on the app as a learning tool could absorb incorrect information.
Dr. Christian Terwiesch, a professor at the University of Pennsylvania’s Wharton School of Business, recently made headlines when he announced that ChatGPT had earned a B on an MBA-level course exam he administered to the app.
However, ChatGPT struggled with deep analysis and applying legal rules on law school exams at the University of Minnesota. In a recent CNN article, UM law professor Jon Choi said, “The goal of the tests was to explore ChatGPT’s potential to assist lawyers in their practice and to help students in exams, whether or not it’s permitted by their professors, because the questions often mimic the writing lawyers do in real life.”
Acknowledging that ChatGPT struggled with many components of the law school exams, Choi still felt ChatGPT could help students produce a first draft that could then be refined.
I agree with Choi that the most promising use for ChatGPT at this time may be human-AI collaboration. In that regard, the technology could benefit students. It is also possible to envision a future where AI provides media, website, blog and email content, driving writers into extinction. I don’t think we are going to shove the AI writing genie back in the bottle. But for now, only humans can adequately convey emotion and the other nuances of speech that make writing human.