The Daily Gamecock

Column: Using AI is not cheating

If a chatbot can ace an assigned essay in 30 seconds, that’s not a scandal about students; it’s an indictment of the assignment. AI tools are often blamed when the real problem is our grading system. For as long as degrees reward fact recall, software that automates recall will win. The solution isn’t bans and detectors; it’s changing what counts as work: framing questions, judging evidence and producing original synthesis.

One of the headline studies behind the “AI makes you dumber” claims is a June 2025 MIT Media Lab preprint that used an EEG while 54 people wrote essays — either with a large language model, a search engine or “brain-only.” The LLM group showed weaker connectivity and lower performance. But even the project’s own overview stresses that it’s early, small-n and not peer-reviewed. Interesting? Yes. A referendum on learning at scale? No.

Unsurprisingly, better-designed studies point the other way when AI is aligned with learning goals. In a randomized controlled trial in an undergraduate physics course, a purpose-built AI tutor led to significantly more learning in less time than a parallel active-learning class, with students reporting higher engagement and motivation. A separate meta-analysis of 51 studies found overall positive effects of ChatGPT on learning performance and higher-order thinking, especially when its use is integrated into coursework.

A systematic review likewise reports increases in performance and motivation alongside caveats about methodology. The pattern is consistent: when AI is woven into the task and students must do something non-trivial with it, outcomes improve.

Moral panics about learning tools are not new. Socrates warned that writing would be an aid “not of memory but of reminding” and would offer only the “semblance of wisdom.” Calculator debates raged for decades before policy settled on balanced use. Early internet adoption in schools was met with ubiquitous filtering rather than instruction in navigating open information. The pattern is structural: new tools arrive, institutions fear atrophy and only later update what “counts” as intellect.

Meanwhile, the platform race has already made advanced AI widely available to students. Google offers a year of Gemini Pro through its official student program, which includes NotebookLM and other tools; OpenAI advertises a student discount pathway; Perplexity provides at least a month of Pro free, with referrals stackable up to two years; and Anthropic has rolled out Claude for Education and subsequent higher-ed initiatives.

The broader backdrop: ChatGPT became the fastest-growing consumer app in history, and mainstream forecasts, such as Goldman Sachs’, anticipate substantial productivity gains alongside disruption. Institutions are not choosing whether students will use AI; they’re choosing whether learning will be designed for the world students already inhabit.

That choice shows up in procurement and configuration. The University of South Carolina announced a first-in-state rollout of ChatGPT Edu for the whole campus, framing it as secure, enterprise-level access. The chief information officer’s guest column assures students that individual chats cannot be viewed by faculty or administrators and that access to activity records would require a legal process such as a subpoena. That mirrors OpenAI’s enterprise commitments and business-data pages, which emphasize that Enterprise and Edu data are not used for model training by default. This is the right direction on privacy, but privacy alone is not a curriculum.

Where campus deployments often stumble is capability. Enterprise and education workspaces give administrators granular control over features and connectors; admins can also disable built-in web search and advanced agentic tools such as deep research. A campus license that disables the very capabilities students will face on day one in internships and jobs, such as code analysis, retrieval connectors, agent modes, image and vision features and robust web research, turns a living technology into a locked demo. That is a configuration problem, not a student problem.

Detectors are another institutional detour that punishes the wrong people. Peer-reviewed work shows common “GPT detectors” are brittle and biased against non-native English writers; investigative reporting has traced false positives that disproportionately hit international and neurodivergent students. Building policy around unreliable classifiers encourages concealment rather than transparent, teachable use, and it increases inequity.

Across studies and deployments, the signal is less about whether AI is used than about what the task asks for. When assignments reward recall and template prose, automation dominates; when tasks require framing, evidence-weighing and synthesis, human judgment stays visible and outcomes improve. Access is already wide, privacy assurances are improving and detectors remain unreliable, so the binding variables are assessment design and system configuration.

Institutions are, in effect, choosing which skills their measurements surface. Blocking core capabilities creates a mismatch with the environments students enter; leaning on AI detectors shifts risk onto students without changing incentives; privacy policies build trust but don’t define learning. The practical boundary of “what counts as intellect” is being redrawn not by proclamations, but by the affordances we enable and the performances we grade.

The next step in this evolution is the rise of “AI agents”: systems capable of autonomously completing complex tasks, not just responding to prompts. Companies such as Anthropic are already moving these agents from tech demos to the real world through SDKs and campus pilots. This genie isn’t going back in the bottle, and that’s fine. Education’s job is not to bottle genies. It’s to teach people what to wish for.

