The Daily Gamecock

Guest Column: Is ChatGPT a trap? Student liability using AI

For students at the University of South Carolina, using ChatGPT can feel less like using a tool and more like taking a gamble. For the 2025-26 academic year, the university entered into a $1.5 million contract with OpenAI to offer free artificial intelligence tools to all students, faculty and staff. The program was rolled out in the fall 2025 term with the goal of improving students’ academic outcomes and enhancing learning efficiency. However, ChatGPT’s rapid implementation has left the university playing catch-up in regulating how these tools are used in practice. As a result, students are left navigating vague and inconsistent expectations, where the line between assistance and cheating is often unclear. Students face increased academic liability not because of intentional misconduct, but because institutional policies and enforcement methods fail to provide clear, reliable guidance, leading many to conclude that the university’s deal with OpenAI is a trap.

Current AI policy in the classroom

The university’s current AI policy places responsibility on instructors but fails to provide students with specific, actionable guidance. According to the Office of Academic Affairs policy 2.03, updated on Aug. 21, 2025, professors must include a “statement regarding the use of generative artificial intelligence (AI) tools (e.g., ChatGPT).” Furthermore, it states, “The syllabus must make the expectations about AI use in the course explicit, whether limited, encouraged, or prohibited — so students understand what is and is not permitted.” While this requirement lays a foundation, it leaves significant room for interpretation. In practice, one or two sentences in a syllabus are insufficient to guide students on proper AI use for specific assignments. This lack of clarity has tangible consequences: roughly one-third of Honor Code cases in the 2024-25 academic year referenced the use of AI.

The definition of cheating

The current definition of cheating with AI is too vague to be fairly enforced, leaving students vulnerable to subjective interpretation. The Office of Academic Integrity claims that the Honor Code is “fully equipped to address academic integrity violations involving artificial intelligence.” However, this confidence is undermined by the university’s own definition of cheating with AI, which it describes as “improper collaboration or unauthorized assistance in connection with any academic work.” This definition implicitly relies on professors setting clear expectations. Without detailed guidance, the system becomes antagonistic to students in a “gotcha” fashion. Students may rightly ask how they can be accused of “unauthorized assistance” when what is authorized was never clearly defined. Furthermore, even when clear guidelines are provided, the nature of AI itself can blur the lines of cheating. Many tools flirt with the line between assistance and authorship, leaving even students who attempt to follow policy in good faith susceptible to accidental violations.

How is the university investigating academic violations with AI?

Although the university claims to evaluate AI misconduct using a “preponderance of the evidence” standard, its investigative process relies heavily on indirect and inherently uncertain methods. In practice, the Office of Academic Integrity has four main tools for investigating AI violations. First, students are directly questioned about their AI usage. Second, investigators use document version history to examine patterns of creation, such as whether large swaths of text were copied and pasted and how long a student spent on the assignment. Third, they generate comparison texts by prompting ChatGPT with the same assignment to establish a baseline. However, the office acknowledges that this method is not definitive, but rather a starting point for further questioning. Finally, cases are documented in a searchable database to promote consistency across investigations. Notably, the university does not have access to students’ ChatGPT histories.

Investigation methods increase student liability

While these methods are presented as objective tools, research suggests they introduce significant uncertainty that ultimately works against students. A Cornell review of AI detection strategies found that instructors correctly identified AI-generated work only 69% of the time, with a false positive rate of 22%. In practice, that means more than one in five pieces of genuine student work could be wrongly flagged as AI-generated. Compounding this issue, detection accuracy was found to vary widely between instructors; consequently, a student’s likelihood of being reported depends not only on their work, but on who evaluates it.

Even the more advanced, process-based methods — such as document history — do not resolve this ambiguity. While recent research shows that process-based evidence is playing a growing role in effective plagiarism detection, it still does not resolve the question of intent. A student may view their AI usage as within guidelines, but their document history may falsely paint a picture of cheating.

Similarly, the university’s use of “baseline” responses introduces further liability for students. The same Cornell study found that when students modify AI-generated solutions or use prompts different from those given in an assignment, detection accuracy drops. This undermines the reliability of comparing student work to standardized ChatGPT outputs, particularly when students have access to a plethora of AI tools. As a result, these investigative methods do not create definitive proof of cheating; they create patterns of suspicion, placing the burden on students to defend their conduct in a system prone to erroneous accusations from the outset.

Heavy-handed punishment for misuse

The punishment for AI-related academic misconduct does not match the nature of the offense, especially given the current uncertainty surrounding proper use. Violations of the Honor Code or syllabus policies on AI use are treated the same as traditional Honor Code violations. Students with one violation are most commonly “placed on 6 months of Conduct Probation and must complete either the Academic Integrity Workshop or Artificial Intelligence Module.” By equating AI misuse with established forms of cheating, the university significantly increases the risk to students attempting to responsibly engage with a rapidly evolving technology. Within a landscape of unclear guidelines and detection methods with high rates of misidentification, these penalties leave little space for trial and error or good-faith experimentation. As students and faculty are still defining what appropriate AI use is, treating missteps as full academic violations discourages learning and adaptation, replacing exploration with risk avoidance.

Damned if you do, damned if you don’t

Taken together, these factors leave USC students cornered in a lose-lose situation. Students who use AI take on significant academic liability, while those who avoid it are left at a disadvantage in a world where AI is increasingly essential. The university must produce clear, cohesive policies that give students the ability to experiment with AI without fear of retribution, while still upholding meaningful standards of academic integrity to preserve educational quality. As long as expectations remain ambiguous, detection methods remain unreliable and penalties remain severe, students are left to navigate a system where good-faith effort offers little protection. In this context, it is no surprise that many students view the university’s partnership with OpenAI not as an opportunity, but as a trap.

If you are interested in commenting on this article, please send a guest column to sagckopinion@mailbox.sc.edu.

