A ₹6,68,000 (about $8,000) tuition refund demand, AI-generated lecture slides with strange images, and a professor caught breaking his own rule: Northeastern University finds itself at the center of a growing debate over artificial intelligence in higher education.
Caught in the Act: Professor’s AI Use Exposes Double Standard
At Northeastern University, a quiet classroom controversy has erupted into a national conversation about ethics, transparency, and the use of artificial intelligence in academia. The center of the storm is Professor Rick Arrowood, a faculty member in the College of Professional Studies, who admitted to using AI tools including ChatGPT, Perplexity, and Gamma to prepare his lectures.
Arrowood’s secret didn’t stay hidden for long. Business student Ella Stapleton noticed inconsistencies in the lecture materials: misspelled words, AI-generated images with anatomical errors like extra fingers, and even a direct mention of a ChatGPT query embedded in the class notes. The discovery felt like a betrayal. “He’s telling us not to use it,” Stapleton said in an interview, “and then he’s using it himself.”
Feeling cheated, Stapleton filed a formal complaint and demanded an $8,000 refund from the university, an amount equal to her course tuition. After multiple meetings with the administration that continued until her graduation, the university ultimately refused to issue a refund, maintaining that the course content had still met academic standards.
When Professors Use AI but Students Can’t: A Policy Grey Zone
The Arrowood incident has raised an uncomfortable question that many universities are still grappling with: who is allowed to use AI in education, and how?
Across classrooms, most academic integrity policies discourage or outright ban students from using generative AI tools like ChatGPT to complete assignments or exams. Yet many educators are quietly incorporating the same tools into their teaching, be it for drafting syllabi, designing quizzes, or even generating lecture materials.
Arrowood, for his part, admitted he should have been more cautious. “In hindsight, I wish I would have looked at it more closely,” he said. While he defended using AI to make his lectures more “engaging,” he conceded that faculty should be transparent about such usage moving forward.
Some fellow educators argue that professors, as professionals, can responsibly incorporate AI tools into their workflow so long as they ensure accuracy and uphold academic standards. Others see a glaring double standard. “It’s the height of hypocrisy to penalize students while reaping the efficiency benefits yourself,” said Dr. Lena Morales, a digital ethics expert at NYU.
The Future of AI in Classrooms: Transparency, Training, and Trust
As AI tools become more deeply embedded in academic workflows, universities are now under pressure to redefine their guidelines and expectations. Northeastern has yet to release an official policy or statement in response to the Arrowood incident, but conversations among faculty, administrators, and students are intensifying.
Ethicists and education researchers argue that AI literacy must evolve into AI accountability. Universities need clear, consistent guidelines that address both student and faculty use, ideally developed with input from all stakeholders.
Stapleton, now a graduate, says she doesn’t regret speaking up. “Even if I didn’t get the refund, at least it got people talking. That’s a start.”
Arrowood, too, appears to have learned from the episode. “If my experience can be something people can learn from,” he said, “then, OK, that’s my happy spot.”