The Daily Gamecock

Column: USC's AI policy is unclear. And that's a problem

A photo illustration of a computer screen showing tabs open for Chatsonic and Blackboard Learn on Oct. 4, 2023. Chatsonic is an AI-powered chatbot developed to rival ChatGPT.

In an age of rapid technological growth, artificial intelligence is driving change across multiple sectors, including education.

AI tools commonly used in educational environments — such as ChatGPT, DALL-E and Packback — have pressed many colleges to take a stand and implement policies for students' use of AI.

But USC relies on its existing rules for plagiarism and cheating, and that policy is riddled with ambiguous language that can cause confusion and create an uneven playing field among students.

The university has no plans to change that in the near future, according to university spokesperson Jeff Stensland. 

"There are no new policies being created specifically addressing AI at this time. Issues that may arise concerning academic integrity and plagiarism are covered by existing policy," Stensland said in a text to The Daily Gamecock.

AI tools can serve as powerful, interactive platforms that enable students to gain knowledge, engage in educational discussions and get immediate feedback, according to the American Psychological Association.

"There are many definitions and many different ways you can look at it. But the most practical way is to think about them as intelligent agents, systems which can help you in making decisions," Biplav Srivastava, a College of Engineering and Computing professor, said.


But while these new technologies enhance education, they also raise ethical, moral and educational challenges. USC's current AI policies need to be more straightforward and offer clear guidelines to address such challenges, including academic integrity risks from technological misuse and reduced student interaction with instructors and classmates.

"The university policy, as such, is in a fast-moving area. This is informed by policies that are coming from different communities, including the professional computing community," Srivastava said. 

Srivastava said he was skeptical about the clarity of USC's established guidelines. 

"So under what situations can AI be used? What are the disclaimers? And what can you do?" Srivastava said. 

The university's ambiguous "It depends" attitude regarding AI's alignment with the Honor Code on its Student Conduct and Academic Integrity page throws students into a landscape of uncertainty and shows a lack of institutional guidance. The accusation of "Cheating - Unauthorized Aid" or "Plagiarism - Copying Work" is serious and should not rest on vague terms.

The university's indecisive stance makes the acceptability of using AI tools contingent upon individual instructors, an approach that can create inconsistencies across courses and even within the same department. Students enrolled in multiple courses might mistakenly believe that permission to use AI tools in one class extends to all of their classes and commit unintentional academic violations as a result. This discrepancy shows the need for a cohesive, university-wide policy that levels the playing field and gives students clear guidelines to follow, preventing unnecessary academic infractions and ethical dilemmas.

Srivastava also said USC's administration should work toward university-wide AI courses that help students build understanding in such a fast-moving area. 

"UT Austin started a one-unit course across the whole university on AI, so there should be something at our university also," Srivastava said. "It's not just for STEM majors. It could be for people in humanities, in arts, in journalism, in nursing and in business."

A comprehensively structured AI course could teach students about the ethical, moral and academic implications of the technology, including the biases and potential repercussions inherent to AI applications, as well as its legal, technical and social impacts.

Srivastava said that the university's policy concerning AI is "a first step" but insufficient in the grand scheme of things.

USC must proactively address the evolving needs and concerns of the student body by creating clear AI guidelines and educational programs as the platforms permeate professional spaces.

In doing so, the university can maintain academic integrity while fostering an environment conducive to learning and professional development. This knowledge will empower students to navigate AI thoughtfully and give them the skills to use AI effectively in their professional and personal lives, bridging the gap between technology and responsibility.