Why AI Detectors Don't Work and What We Can Do About It
- Center for Teaching, Learning, and Technology
- Aug 19
Presented by Greg Longo, Kirstie Richman, Jennifer Hennessy-Booth, Jamie Andrews
As AI tools like ChatGPT have become widely adopted by students, many faculty and institutions initially turned to AI detectors to maintain academic integrity. However, growing concerns about reliability, false positives, and ethical issues have revealed the limitations of this approach. With Eastern's new policy uncoupling AI from academic integrity, and with detection tools proving unreliable, faculty may feel frustrated about how to address unauthorized AI use. This session moves beyond the problems to provide practical, pedagogically sound solutions.
Part 1: Understanding Why AI Detectors Fall Short
We'll begin by examining how AI-detection tools work and why they often fail, exploring issues of reliability, false positives, and ethical concerns. This foundation helps us understand why detection-based approaches are problematic and why alternative strategies are needed for both traditional and flex education contexts.
Part 2: Human-Centered Assessment Alternatives
Building on this understanding, we'll shift focus to learning-centered approaches that emphasize humanness criteria in assessment. Rather than trying to catch AI use, we'll explore how to design assignments and rubrics that authentically assess student learning and encourage genuine engagement. This collaborative segment will provide concrete suggestions while promoting discussion among participants.