A new chapter unfolds—one where the line between creator and creation is redrawn, and the very essence of consciousness is challenged.
The above sentence sounds pretty good, doesn't it?
What if it were generated by ChatGPT, the OpenAI tool that has become a new technological trend in high schools and colleges?
It was.
How would anyone know?
The use of AI has become increasingly common here and across many campuses nationwide, but what is a popular trend for some has become a disease for others.
Nationally, according to the article “Chat GPT Cheating Statistics & Impact on Education”, 43% of college students have used ChatGPT or similar AI tools.
“Of these, 89% used it for homework, 53% for essays, and 48% for at-home tests,” the article read. “Twenty-six percent of K-12 teachers have caught a student cheating with ChatGPT,” per a Daily Mail survey.
The article didn’t say whether the AI used was for legitimate or nefarious purposes. Nonetheless, instances of its use have increased on campus.
“There has been an increase in reports of the use of AI and an increase in reports to Ferrum’s Honor Board,” said Abigail Jamison, psychology professor and chair of the Honor Board.
On the flip side, many say AI can be used for good things, and Jamison understands there’s another side of the proverbial coin.
“I think AI is a two-edged sword. I do see it sticking around and being useful in many ways, but unless a professor clearly indicates the use of AI is allowed or appropriate for an assignment, it shouldn’t be used there,” Jamison said.
But what if someone uses it in the wrong way?
“If you use AI inappropriately, your professor should speak with you about it, discuss any penalties, and then refer it to the Honor Board if necessary. The Honor Board will keep a record of the infraction while you are a student; however, you may appeal those penalties with the Honor Board,” Jamison said.
Many colleges have adopted AI detection software that scans student work for AI generation. One popular service to flag AI-generated papers is called Copyleaks, which is used here.
“It is an AI-based company that integrates directly with Brightspace, so students can submit their papers on Brightspace, and Copyleaks runs a scan,” Director of Learning Ashley Williams said.
AI seems to be on track to keep growing, and Josh Jordan, senior, is someone climbing aboard the AI train.
“I like how it helps come up with ideas for certain things that I’m working on,” Jordan said.
Professors fear students might use it to take the easy way out.
“It definitely is something that can be used to make things easier and lets people not have to think hard about things,” Jordan said.
Religion Professor Eric Vanden Eykel has his own ideas about the technology.
“It’s very new still. It became a thing less than a year ago. So my opinion has gone back and forth. I think it’s complicated, specifically because it’s new, and we really don’t know how to deal with it yet,” Vanden Eykel said.
While some view AI as a good thing, others are leery it will be misused and exploited.
“It can be a useful tool but also has the capacity to be abused,” Vanden Eykel said. “Last semester, I had 15 students use it to generate an essay. I wasn’t surprised because it was completely new, and everyone was trying to figure out how to deal with it. This semester, I only have had one that I know of.”
Schools have made stricter rules about the use of AI and ChatGPT, though policies can vary from professor to professor.
“I started out this semester with a more clear guideline of ‘Please don’t use this and here’s why,’” Vanden Eykel said.
On the other hand, some professors think it can be a positive tool, and some may even use it themselves.
“I think that one of the things that is potentially useful about it is that it can help people brainstorm ideas,” Vanden Eykel said. “I think it can be extremely helpful in editing. I have used it actually for certain things when my friends send me things to read over.”
There are myriad ways to use the technology, and Vanden Eykel has carved out his own.
“One of my first steps is to run their texts through and ask it to find some typos. It does in 30 seconds what I would spend hours doing; then I can read through it without the typos. One place it is extremely helpful is technical writing. For example, I manage a social media account for a local farm. Three quarters of what I post is generated by AI. That’s because what I post is recipes and how-to-freeze recipes,” he said.
People tend to be fearful of new things they know little about, but Vanden Eykel feels differently.
“With technology that’s so new, I think we need to be open to it. It might end up being useful in ways that we can’t even see yet,” he said.
How do some professors know when it’s AI generated without the help of Copyleaks?
“It can write elegantly, which is part of the way you can detect when someone is using it on an assignment because often it writes in vague but elaborate language that’s not even close to how a person would speak, let alone write,” Communication Professor Karl Roeper said.
Some people may be on the side of AI use, but Roeper has a more fundamental idea.
“Why am I stuck on people using their brain? I think that’s one of the best things we got. If we rely on AI too much, our skills will atrophy,” Roeper said.