ChatGPT and AI technology
How AI impacted USD students this semester
Anjali Dalal-Whelan / Asst. News Editor / The USD Vista
In the past months, Artificial Intelligence, or AI, has become more accessible to the public than ever, making it a topic of discussion all semester. ChatGPT, a chatbot developed by OpenAI, an AI research organization, first launched on Nov. 30, making this spring semester the first full semester in which it was available for use by students.
Recently, many other companies have released AI products of their own. Google released its chatbot alternative “Bard” on March 21. Snapchat recently released an AI companion called “My AI,” a “personal friend” that users can talk to. This AI is pinned above users’ actual friends and cannot be removed without paying for a monthly Snapchat+ subscription.
However, Snapchat has received pushback over its AI after users discovered it could lie in conversations. One user posted their experience with the AI to TikTok: the chatbot claimed it did not have access to their location, then later correctly gave directions to the closest McDonald’s. Others are concerned about how “My AI” will affect the millions of Snapchat users who are under 18.
Grammarly, a popular grammar-correcting website, has also released an AI that helps users improve their writing or creates new writing based on prompts the user gives it. One major concern with AI in education is the ability to easily plagiarize writing. While USD has not created any specific guidelines around the use of AI chatbots, plagiarism is taken extremely seriously and, according to USD’s honor code, can result in “reduction of grade; withdrawal from the course; a requirement that all or part of a course be retaken; and a requirement that additional work be undertaken in connection with the course,” and in some cases can result in academic probation, suspension or expulsion.
USD Professor of Communications Dr. Nikki Usher described how AI had changed the way students could commit academic dishonesty.
“Fall semester, I was worrying about essays being bought off the internet—hard to police, but usually so off-topic that students do poorly anyway. Spring semester, the rules of the game have changed entirely: As I finish the term, I must be on-guard against whether a machine has written my students’ papers for them,” Dr. Usher wrote in an article for Slate magazine.
USD Professor of Philosophy Dr. Matt Zwolinski explained his experience with AI plagiarism this semester.
“I’ve seen at least one paper that strikes me as quite likely produced by artificial intelligence, and so I’m trying to figure out how to deal with that. It’s not like the old plagiarism,” Zwolinski said. “In the old days, like last year, if you suspected that a student was plagiarizing, the way you addressed that was you found the text you thought was plagiarized, you found a source that contained identical text, and then you put them next to each other and showed them to the student and said, ‘Look, this can’t be an accident.’ With AI, you can’t really do that.”
Dr. Zwolinski added that USD had not yet released official guidelines on how to deal with AI plagiarism, but that the Center for Educational Excellence had provided resources for professors to learn about AI in the classroom.
“We’re still very much in the discovery phase as a college, where we’re trying to understand the phenomenon and brainstorm different ways of responding to it. And lots of different professors are trying out lots of different things,” Dr. Zwolinski explained.
Although ChatGPT can be used to commit plagiarism, there are other, useful ways it can be employed by students that don’t break the honor code.
USD junior Natalie Nguyen explained how she employs ChatGPT to assist with schoolwork as a STEM student.
“If it’s too much to read the textbook, I’ll sometimes ask ChatGPT to summarize things for me,” Nguyen explained. “It’s great for solving [practice] math problems; it explains it to you, so it’s nice.”
Students’ majors affect their perspectives on how ChatGPT can be utilized for schoolwork. USD junior Will Stefanou said that, as a Communication student, he has never used ChatGPT because he values uniqueness in his writing.
“I’m never going to use ChatGPT; first of all, I don’t want to get in trouble for using it, and it’s just not how I write. I want to write how I write,” Stefanou said.
Although ChatGPT can be helpful for providing information, it is not always correct. The New York Times reported that ChatGPT, as well as Google’s Bard and Microsoft’s Bing chatbots, all suffer from so-called “hallucinations,” making up information about events that didn’t happen. The Times reported that these “hallucinations” occur “because the internet is full of untruthful information, the technology learns to repeat the same untruths. And sometimes the chatbots make things up.” The “hallucinations” demonstrate that AI is not without flaws.
AI technology continues to learn from new data and will improve as time goes on. Some worry that AI’s capabilities will eliminate the need for certain jobs. USD junior David Amano doesn’t believe AI has that ability yet.
“AI’s not solving its own issues; right now it’s scraping [information] off the internet and applying that and summarizing that, so right now it’s paraphrasing other humans’ work. So when it starts to solve problems, then maybe it’s an issue, but for now it’s just a tool,” Amano said.
Now that AI technology has been released, it will be a part of the academic world forever. As with all new technologies, colleges will need to learn how to adapt to the changes.