Publicly accessible artificial intelligence software like ChatGPT has raised many questions for educators about cheating. The easily accessible breadth of knowledge and the ability to answer specific questions make AI a valuable tool for students. High schools and universities alike grapple with how to prevent students from using the software on assignments.
Almost a full year after the launch of ChatGPT, free AI detectors are popping up on the internet. In October, the American Federation of Teachers, the second-largest teachers' union in the country, partnered with the AI detection software GPTZero. Ironically, GPTZero uses AI to detect AI. This partnership is intended to create a version of GPTZero that will be accurate and accessible to educators.
Both parties acknowledge that AI is not necessarily the enemy and can be used as a learning resource. They both want to encourage the ethical and responsible use of AI in the classroom.
AI detectors are not foolproof tools. Another popular AI detector, Turnitin, boasts a 98% accuracy rate for AI detection. Even that 2% error rate means some students' papers may be flagged as AI-generated when they are not. Turnitin's executives have said that the intention of their software is not to ban AI.
“Treating AI purely as the enemy of education makes about as much sense in the long run as trying to ban calculators,” Turnitin executives said to the Washington Post.
Wake Forest University’s AI policy allows professors to dictate what type of AI use is permitted in their classrooms. To help guide professors in crafting AI policies, the school is actively releasing a series of blog posts. The first of these posts focuses on developing syllabi with AI policies and is authored by Betsy Barre, the executive director of the Center for the Advancement of Teaching.
Wake's approach is to let professors set their own AI policies based on the specific needs of their departments and classes. Thus, there is no uniform policy on AI.
“Our leaders have chosen (wisely, I think) to hold off on such a policy until we have a better understanding of these tools and the various ways our faculty and students are using them,” Barre wrote in one of the blog posts.
Without a uniform policy, professors opened up discussions during the first week of classes to get feedback from students on their ideal AI policies. One of the recommendations Barre makes in her blog post is to try alternative grading models so students feel less pressure to cheat.
I emailed Barre shortly after reading her blog post.
“Professors across higher ed [sic] have been talking about alternative grading models for quite some time,” Barre said. “We are hosting a reading group this semester with a group of faculty on the new book Grading for Growth.”
AI is becoming a more prominent part of learning each day. Around 40 percent of students across the country use AI for their schoolwork, and that number is rising as students gain familiarity with the software. Wake and many other institutions are preparing to adapt as further AI challenges spring up.