AS AI ENTERS THE CLASSROOM, U OF O PROMOTES CRITICAL AI LITERACY, WHILE CARLETON’S CUPE LOCALS PUSH FOR CLEARER SAFEGUARDS TO PROTECT ACADEMIC WORKERS
As artificial intelligence (AI) reshapes university classrooms, both the University of Ottawa and Carleton University are leaning towards an institutional model that emphasizes teaching students how to use generative AI ethically and critically. However, Carleton University’s unionized academic workers are warning that the technology could undermine jobs and working conditions.
Earlier this term, the U of O hosted “Insights from the AI Symposium: Building inclusive, transparent and innovative AI,” where researchers and educators argued that banning AI outright is neither realistic nor productive. Instead, they say universities must focus on building students’ capacity to understand AI’s limits, verify information, and apply the tools responsibly.
The stance contrasts with concerns raised by members of the Canadian Union of Public Employees (CUPE) Local 2626, which represents teaching assistants and contract instructors at the U of O.
The broader CUPE union has echoed concerns raised by CUPE 4600 — the union representing contract instructors and teaching assistants at Carleton University — whose members recently rallied against what they describe as vague and inadequate AI policies. At the protest, members warned that generative AI could be used to justify cutting teaching assistant hours, automating grading, and further casualizing academic work. Protesters called for clearer, enforceable policies that protect workers and ensure that human labour remains central to teaching and assessment.
Carleton’s administration has published guidance encouraging instructors to clearly outline how generative AI can or cannot be used in their courses, linking AI practices to academic integrity, data protection and environmental factors. The university does not ban AI, but places responsibility on individual instructors — a decentralized approach similar to the U of O’s model, but one that unions argue lacks sufficient labour protections.
For David Knox, a professor in the U of O’s School of Engineering Design and Teaching Innovation, the question is not whether AI will reshape higher education, but how institutions choose to respond.
“They went through phases initially, like many Canadian institutions across Canada — they banned it,” Knox said, referring to U of O’s early response to generative AI. “Now they have… a specific call out for Gen AI in courses and what is academic misconduct. The policy at uOttawa is that it’s the prof. It’s not the same in every course.”
Knox argues that flexibility is necessary because neither universities nor employers have settled on a single, consistent way to integrate AI.
“I don’t think the world has a consistent policy. I don’t think employers have a consistent policy,” he said. Knox observed what he claims is a “polarizing effect” in his programming classes: stronger students tend to use AI in ways that deepen their understanding, while weaker students may rely on copy-and-paste answers that short-circuit learning.
The dynamic worries him, because it risks widening existing gaps between students. “Better learners will get better learning… and weaker learners will get weaker learning. That doesn’t seem fair,” he added.
Building on that contrast, U of O’s AI symposium framed generative AI less as a threat and more as a literacy challenge for universities.
According to the symposium’s summary, speakers emphasized that students must learn how to think critically about AI-generated outputs, understand the technology’s limitations and verify information for accuracy, rather than relying on it as a shortcut. Researchers also stressed the importance of transparency — encouraging students to disclose when and how AI tools are used — and of embedding ethical discussions about bias, data sources, and power into coursework.
The symposium’s tone suggests that AI is already embedded in students’ academic and personal lives, making outright bans difficult to enforce and potentially counterproductive. Instead, participants argued that universities should treat AI much like other digital tools: something that requires instruction, guardrails and ongoing evaluation.
This position aligns with comments previously made by Marie-Eve Sylvestre, the president and vice-chancellor of U of O, in an earlier Fulcrum interview.
When asked whether the university would move toward a centralized AI policy, Sylvestre said, “There’s a lot of changes and innovation in that area… There’s no denying that AI has transformed the way we teach. And so yeah, guidelines. But [at] the end of the day of course, faculty members are… remaining the masters of their courses.”
The interview suggests the U of O recognizes AI’s growing presence in classrooms and its mixed implications, but has stopped short of committing to standardized, university-wide rules. Instead, individual instructors continue setting their own policies, with no centralized regulation around fairness in grading.
Union leaders have pointed to a lack of meaningful consultation with TAs and contract instructors in the development of AI-related guidelines. Without worker input, AI policies risk prioritizing institutional efficiency over fair workloads, job security and high-quality feedback for students.
As AI becomes more embedded in university classrooms, the gap between these two approaches is only getting wider.
At the U of O, the focus remains on helping students learn how to use AI responsibly, even if that means policies look different from one course to another. At Carleton, unions are asking the university to slow down and think more carefully about what AI means for the people who actually do the teaching and marking — all while CUPE members make sure AI doesn’t become an excuse to cut corners, reduce jobs, or sideline human feedback in education.
The Fulcrum reached out to CUPE 2626 for comment but did not receive a response in time for publication.