AI in the Classroom Policy
SLC approved an AI policy for educational use in September 2024. Below are the details of the policy, along with guidance on academic integrity and AI content detectors.
May experiment with and encourage the use of AI tools to support student learning
Must apply critical thinking to evaluate all AI tool output
Must refrain from entering sensitive information into any AI tool
This precludes the use of recording tools such as OtterAI
Must confirm with their instructor whether AI usage is acceptable in the course
Apply critical thinking to evaluate all AI tool output
Acknowledge and cite use of all AI tools as required
Recognize that failure to adhere to the above may be deemed academic misconduct
Should determine if AI usage is appropriate in a particular course or activity
May discuss the role of AI if appropriate in a particular course
Must communicate to learners if and how AI tool usage is permitted
Must guide students in usage and citation of AI tools (if appropriate)
The unapproved or inappropriate use of GenAI tools by learners may be considered a breach of academic integrity and viewed as cheating or plagiarism. Our SLC Academic Integrity Policy is linked here for further reading.
To mitigate unapproved or inappropriate usage, it is recommended that all educators:
explore GenAI tools to better understand their capabilities
'stress test' existing assessments to identify areas for process redesign or tool integration
craft a course-level GenAI policy detailing which tools and what level of usage are acceptable (if any)
include assessment-level guidelines for tool usage (if appropriate) as part of all assessment instructions
modify assessment rubrics to account for inappropriate usage
engage students in conversations about when GenAI use is and is not appropriate
For guidance on any of these steps, please book a 1on1 with the SCTL.
Multiple studies have demonstrated the unreliability of detection tools: they are biased against ESL students, mislabel human-written content as AI-generated, and generally fail to function as consistently as promised. Please see the articles below for further reading.
In brief, content detectors work by analyzing patterns in text. Two key metrics are perplexity (how predictable the word choices are) and burstiness (how much sentence length and structure vary). Both can easily be shifted by prompting.
More often than not, what content detectors flag is underdeveloped writing habits. And as detectors improve, so do the AI writing tools; it's a never-ending game of cat-and-mouse.
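For the technically curious, the sketch below is a toy illustration of those two signals, not how any actual detector is implemented: it approximates perplexity with a simple unigram word model and burstiness as the relative spread of sentence lengths. The function names and sample text are our own, purely for illustration.

```python
# Toy illustration only: real detectors score perplexity with large language
# models, not a unigram model built from the text itself.
import math
import re
from collections import Counter
from statistics import mean, pstdev

def toy_perplexity(text: str) -> float:
    """Unigram perplexity: lower values mean more predictable word choice."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    # Average negative log-probability of each word under the unigram model.
    avg_nll = -sum(math.log(counts[w] / total) for w in words) / total
    return math.exp(avg_nll)

def burstiness(text: str) -> float:
    """Relative variation in sentence length; human writing tends to vary more."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) / mean(lengths)

sample = ("AI tools can support learning. They can also be misused. "
          "Whether a given use is appropriate depends on the course, the "
          "assessment, and the instructor's stated policy.")
print(f"perplexity: {toy_perplexity(sample):.2f}")
print(f"burstiness: {burstiness(sample):.2f}")
```

Because both numbers depend only on surface patterns in the text, a prompt as simple as "vary your sentence lengths" can shift them, which is one reason detector verdicts are easy to evade and hard to trust.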
Game of Tones: Faculty detection of GPT-4 generated content in university assessments (Journal of Academic Ethics, 2023)
GPT detectors are biased against non-native English writers (arXiv, 2023)
Navigating the Shadows: Unveiling Effective Disturbances for Modern AI Content Detectors (arXiv, 2024)
Perceptions, performance, and detectability of conversational artificial intelligence across 32 university courses (Scientific Reports, 2023)
Testing of Detection Tools for AI-Generated Text (International Journal for Educational Integrity, 2023)
Have questions about GenAI and academic integrity? Book a 1on1 with the SCTL.