ChatGPT’s Hall of Mirrors
By Sydney Umstead, Asst. News Editor
Artificial intelligence (AI) has made its way into academic conversations as professors race against the prospect of students using sites like ChatGPT as a loophole.
ChatGPT allows people to enter prompts such as “Write me a paper about the French Revolution.” This makes it difficult for professors to determine whether an assignment is authentic.
Dr. Stephen Chanderbhan, associate professor of philosophy, has updated his courses for a multitude of reasons, one being the creation of this software. He stated, “It really is not helpful to the development of a critical thinker to not have to sit down and really try to work out an argument.”
While Dr. Chanderbhan stated he is not “super paranoid about bunches of students using AI to do tests,” it does introduce difficulties like “outsourcing our ability to think for ourselves.”
Previously, students who took some of Dr. Chanderbhan’s courses would receive online and open-book tests, but now the online portion focuses on things like finding quotes or critical thinking.
The philosophy department at Canisius has faced a cut in professors. When asked if it is harder to catch AI inside a smaller department, Dr. Chanderbhan noted his experience last semester with students plagiarizing assignments, and adjudicating those instances, citing the emergence of AI in the classroom as a “secondary effect.”
“Switching from a format that would kind of make that type of cheating much more possible to a type of format where that cheating is less possible is a kind of labor-saving strategy,” said Chanderbhan.
Dr. Chanderbhan also took a philosophical approach to understanding some of AI’s appeal to students. He considered that, “It’s a certain set of skills, namely, looking stuff up and manipulating characters on a page.”
Dr. Chanderbhan noted “maybe in a dystopian future we can imagine where we can outsource all of our thinking into something like that,” and said, “if we do let it happen, we will have lost something, I think.”
The English department presently faces difficulty distinguishing student writing from AI-generated composition. In a conversation with Graham Stowe, Ph.D., of the Writing Center, he mentioned how artificial intelligence is taking away a key part of the college experience.
“My bigger, overarching concern is that students using it for their work are missing an important part of college, the part where they learn to think about the world in a more sophisticated, nuanced way,” said Dr. Stowe.
Dr. Stowe and his colleagues are currently navigating ways to address the usage of AI such as updates to the Honor Code; however, he emphasized, “It’s important that students know, though, that using AI is already covered under our current honor code as it’s written: it is cheating to use it to do your work.”
While no decision has been made, there has been talk of more in-class writing, as AI-generated text tends to slip past plagiarism detectors. Turnitin is an exception, though the company will not reveal how its detection works.
As AI grows more advanced, there is a conversation surrounding what it can build both in terms of human art and thought.
For Dr. Chanderbhan, “If there is anything about ethics that is not formulaic, it’s not clear to me that AI can replicate that,” as he also spoke on the fact that “AI might make an absolute judgment where there is really a lot more nuance to the situation.”
In terms of art, Dr. Stowe expressed that AI has a way of taking digital art from the original creator, and “the algorithm that creates these pieces is populated by work created by real people, and they’re not getting paid for the new creation.”
As of right now, the poetry AI creates is not yet comparable to real work: when Dr. Stowe asked it to write in the style of William Carlos Williams, it produced “a really terrible imitation.” However, AI does have the potential to improve.
Both professors discussed how AI could still be useful for students as a study aid; however, they stressed that it should not be the backbone of someone’s work, and that relying on it in that way could be dangerous.