Learning in an AI world
From public schools to post-secondary institutions, educators are grappling with AI's impact on education
Be responsible, not afraid:
Winnipeg School Division sets framework for AI in classrooms
Manitoba’s largest school division has only one fixed rule for its staff and students when it comes to using artificial intelligence.
“Don’t put personal or confidential or sensitive information into one of these robots,” said Matt Henderson, superintendent of the Winnipeg School Division.
Given how rapidly this technology is evolving, Henderson said there’d be little value in creating other hard-and-fast rules.
Instead of banning AI programs, the board office is promoting the responsible use of AI-powered chatbots and related tools.
Its new framework’s overarching principle — “AI-assisted, never AI-led” — was informed by feedback from staff and student families.
“You have to embrace it. You can’t fear it,” said Becca Koenig, a student teacher who is doing her inaugural University of Manitoba practicum in the division.
“(Our students) are going to use it, so we might as well teach them how to use it properly.”
Several months into her after-degree education program, Koenig has found U of M professors rarely ban AI chatbots outright.
Faculty members have made clear when, why or how it can be used, if at all, she said, noting that generative AI is typically allowed during the brainstorming stages of projects.
The 27-year-old said she is setting a similar tone with her middle years students at Earl Grey School, one of about 80 buildings in WSD.
The division’s AI framework states it aims to support teachers to design lessons that are more equitable and engaging.
Henderson noted that some parents have urged WSD to ban AI in recent months. At the same time, teachers have expressed concern that schools failed to think proactively about how phones and social media should, if at all, intersect with education, the superintendent said.
“The horse has left the barn on those. How do we make sure that we’re being really responsible and intelligent about artificial intelligence?” he said.
Senior administration has sought input from its 6,000 employees, including teachers, custodians and clerks, about how they’ve been using AI since the start of the school year.
The board office invited families to a Sept. 17 open house at Technical Vocational High School to discuss the topic.
Russell Miller, a teacher and father in central Winnipeg, described AI as “a juggernaut” unlike anything else he’s used in his 22-year career.
The Grade 4 teacher at Greenway School has adopted Google Gemini as a virtual assistant and in-house translation service.
Miller is currently using it to create vocabulary charts featuring English and Arabic words, as well as phonetic pronunciations, for a newcomer student from Syria.
One of his proudest moments from last year was helping his students produce “Bill-Nye-style songs” about ligaments, nerves and other anatomy terms via Suno, an AI music creator.
“Once you do a bit of the hard work and learn how you can utilize it, yes, AI saves time — but it also just makes you a better teacher,” he said, noting that it helps him better tailor lessons to his students’ diverse interests and skillsets.
Miller has completed division-run professional development, Google training modules and a microcredential on generative AI (GenAI).
WSD recently renewed its partnership with Red River College Polytechnic to train a second group of teachers to leverage AI-powered chatbots and other tools that fall under the GenAI umbrella.
Koenig, the student teacher, keeps the ChatGPT app handy for when she’s in a creative rut.
She recently sought ideas for a fun, beginner-friendly and festive “flight-themed” art project when she was teaching a science unit on aviation.
The chatbot proposed drawing the aurora borealis with Santa Claus’ sleigh in the foreground.
In response, Koenig tracked down oil pastels. She also found herself looking up videos of drones flying in the northern lights and researching this phenomenon’s historical significance to Indigenous people.
WSD’s new framework calls on teachers to “ground every AI use” and reflect on whether their usage “serves reconciliation and human flourishing.”
Koenig said she was proud of how her interdisciplinary unit on flight turned out, and how she was able to incorporate Indigenous perspectives into it.
“It hit all the points that I wanted to cover,” she said. “And I don’t think I would’ve thought of it if it weren’t for ChatGPT.”
maggie.macintosh@freepress.mb.ca
University professors struggling with practical, ethical consequences of artificial intelligence for them, their students, the future of higher learning
It’s almost funny to Roisin Cossar now, looking back on that day in December 2022, when she sat down with her university students to test a new artificial intelligence program called ChatGPT. Cossar, a longtime University of Manitoba history professor, had heard about it from a colleague, and was curious about what it could do.
So they sat there, tapping questions into the ChatGPT text box, marvelling at how it quickly spit out answers in fluid, even conversational wording, almost like a human would do.
“We were like, ‘Geez, look at this, it actually makes paragraphs and, you know, writes coherently,’” Cossar says.
Now, looking back: “I just think, ‘oh man,’” she adds, with a laugh.
Three years later, artificial intelligence — particularly large language models such as ChatGPT — has triggered a seismic shift in academia, sending universities racing to catch up, and changing the work done by Cossar and her colleagues forever.
Students’ use of AI to write essays or pass tests is skyrocketing. So are the resources invested into trying to catch it.
Faculty wrestle with how to adapt courses to make it more difficult for students to use AI to cheat on assignments and sidestep the learning process. Universities convene committees to debate where to draw the line. And all the while, the technology is improving at lightning speed, becoming capable of doing more things and better at evading attempts to detect it.
Making matters even more complex, there is no one-size-fits-all solution, no one standard that suits every field of study.
“One of the big problems is that with all these different disciplines that are widely different from one to the other, we can’t deal with them all in the same way,” says Jenna Tichon, U of M Faculty Association vice-president. “There’s just so much of this urgent need, it’s kind of like bailing out a canoe at some point.”
Of course, cheating isn’t new. But the ubiquity and capabilities of AI present unique problems.
The first is that, while many users trust AI’s results — its conversational fluidity tends to convey a certain authority — it is often flat-out making things up. Large language models such as ChatGPT are famous for producing “hallucinations,” entirely fictitious statements that, at a glance, look realistic.
In academic papers, these manifest as references to historical events that didn’t happen, scientists who don’t exist, quotes that were never said. Large language models such as ChatGPT also have a tendency to fabricate citations, making realistic-looking references to non-existent studies or inaccurately summarizing those that do exist.
A new study, published last month in the scientific journal JMIR Mental Health, found that when ChatGPT 4.0 was asked to produce work on psychiatric medicine similar to academic papers, nearly two-thirds of its citations were either entirely fake or contained at least one error.
Those types of mistakes are the most reliable “tells” for AI cheating, but they are laborious for faculty to detect.
“There’s no reliable way to do it, unless students are doing very obvious things,” Tichon says. “People are spending more time carefully going through citations to do more auditing of, ‘Are these even real?’”
Beyond these nuts-and-bolts concerns of accuracy and fact, there are much larger and more troubling questions. At its core, rising AI use scrapes against the underlying philosophy of higher education: what is the point of academia? What is the goal of a university education? What, fundamentally, are students there to do?
If the point were simply to get a degree, bought and paid for, then AI use might not matter. But university is supposed to have a higher purpose: teaching students how to think, how to make sense of complex information and how to synthesize that into their own approach to their field’s pressing questions.
“I’m very concerned about the cognitive skills that might be lost,” Tichon says. “Because some of the things that students, but also society, businesses and workplaces, in general, view as low-level work, I actually view as incredibly valuable skills. Being able to read a paper and summarize the main points, that’s an important brain muscle that you need to be able to develop.
“A lot of people will say, ‘Well, I’m just using it for idea generation,’” she continues. “But creativity is also a muscle that you build and flex, and I worry that spark of imagination is going away. If you lose those skills, at some point you’re completely limited to what the AI model is able to give you.”
And over the long term, if academia becomes too thoroughly saturated with AI use from both students and faculty, will that undermine the value of a degree? Some warn that it will. As the headline of a recent article in the magazine Current Affairs put it: “AI is Destroying the University and Learning Itself.”
That statement may be frank, but it is not overly alarmist. Once a degree is no longer reasonable proof that a student learned how to think, analyze information and compose the work of their field on their own, then the entire endeavour is called into question and the value for all students, even those who never touched AI, is diminished.
Yet each year, more students turn to it to get through their courses. But why?
To Cossar, getting to that “why” of students using AI is important — in some ways, more important than sniffing out when they do. After all, when students enrol in a course, they generally have an interest in the topic and a desire to grow their knowledge. If they’re opting out of that process, finding the disconnect may help solve it.
“We cannot simply try to AI-proof our classrooms, because AI is going to morph and change,” she says. “People feel really overwhelmed by that, by the speed with which it’s changing. You think you’ve got a sense of how students use it, and then there’s something else.
“So if we emphasize policing and detection, I think that’s the wrong avenue to take. It’ll burn us all out, and it’ll create a real sense that our students are the enemy, which they are not. It’ll keep us from thinking about the best ways to teach the discipline that we work in.”
Over the last year, Cossar’s department reviewed several dozen academic misconduct cases dealing with suspected AI use. In many cases, the students quickly admitted to it. When Cossar, who is department head, asked why, what she heard was instructive. Many were simply overwhelmed by stressors, personal and academic.
“It’s kind of heartbreaking,” she says. “They’ll say, ‘Yeah, this is exactly what I did,’ and they’ll tell you the circumstances in their lives that brought them to that, which is something that’s been useful for me. Often, it’s not that they intended to do this, that they were gonna pull the wool over everybody’s eyes. They just got backed into a corner.”
Another common refrain: the students wanted to do the work, but didn’t know how.
“I’m seeing students who’ve been handed things to read, and they don’t know how to do the reading they’re asked to do,” Cossar says. “They’re faced with scholarly writing in language that’s not pitched for them, and nobody has talked to them about that.”
To Cossar, that insight illuminates a path forward. If students more often turn to AI when they can’t find the way into class material, then perhaps changing how the courses are taught can extend a more navigable ladder, without compromising on academic rigour.
“Sometimes I think about how challenging it is as an undergraduate student to just sit in a lecture and have just words just wash over you,” she says. “And nobody kind of stopping and saying, ‘Let’s really get into this concept and try and figure this out together.’ When people do that, students respond, but otherwise it can be very overwhelming.”
For Cossar and other professors, that means re-imagining how they teach, and how they assess student progress. It means looking for adaptations that are both more of a barrier for AI — such as fewer open-ended essay prompts, which ChatGPT generally handles well — and more exciting and accessible to students.
What those adaptations look like varies. In the history department, some professors are shifting to more hands-on learning: one of Cossar’s colleagues got students making stained glass in a relevant course. Another had students hand-transcribe 19th-century letters before writing about the contents, which many later praised as one of the highlights of their year.
Cossar points to how she more often asks students to read a work in class, and physically annotate it in pen.
“When I was talking with students who’d used ChatGPT to write their essays, they’d often talk about the fact that even when they were trying to write without using (it), they weren’t doing any kind of note-taking,” she says. “They didn’t have that skill. So this is about showing them that, ‘Look, there’s an intermediate process before you write any prose.’”
In the U of M’s computer science department, associate head John Braico has taken a similar approach, turning to “small, lightweight” activities that evaluate how students are grasping the material, step-by-step. Traditionally, he says, computer science classes focused more on big assignments, requiring many hours of work.
“That puts a lot of weight on that one assessment, so students feel obligated to do well in it,” he says. “And if they aren’t able to get it, then they’re going to fall back on the tools (such as AI), because they think that’s the only way that they’re going to be able to get a good mark. But they don’t understand the material, as a result.”
Among all academic fields, Braico’s stands at a unique juncture when it comes to navigating how students use AI in their studies. Some may use it to cheat: ChatGPT and other large language models can write software code, although just like with history essays, they spit out a lot of junk. Professors often have a hunch when students have used it, though it can be difficult to prove.
At the same time, while professors in some fields consider bans on AI, that doesn’t make sense in computer science, where students have to understand how it works, how they can harness it in their careers and even how to shape and construct it.
“We’re in kind of an interesting position, because our graduates create this technology,” Braico says. “We have a demand to integrate it into our program. There’s legitimate application for the technology. At the same time, we graduate students who go on and get professional jobs in software development, and we want to make sure they have fundamental skills.
“So we can’t just say, ‘OK, we’ll use it everywhere,’ because then they don’t learn those fundamental skills.”
With this in mind, computer science faculty have looked to strike a balance. In early-year classes, AI use is discouraged; but in more advanced classes, once students have built up their skills and can better distinguish good code from bad, there is more leeway to use it in their large, professional-style assignments.
“At that point, we say, ‘You can use these tools if you want to, but you’re responsible for the quality,’” Braico says.
Braico is a self-described “AI skeptic.” What he means is that, although the technology has shown explosive growth in the last few years, he believes the pace is plateauing. That said, he emphasizes that there is a vigorous debate about that, and many of his colleagues disagree with him.
If the technology does plateau, it means the worst popular fears about how it could damage academia and job markets may not come to pass — and that we have more time to adapt. Braico notes that many high school students thinking of entering computer science worry that AI will eradicate software jobs, but he doesn’t think they will disappear.
“Having seen these kinds of cycles or shifts in the past… I don’t think that this one is particularly different,” he says.
This points to how we perceive AI. To most people, it’s an opaque technology, something that burst onto our everyday life almost overnight. It operates in ways we don’t understand, interacting with us and our world in ways we had no idea that machines could. That mystery makes it both wondrous and terrifying.
But to Braico and others with a deeper technical grasp of how those systems operate, the veil of mystery falls away, and the limitations become more apparent. Perhaps that makes AI seem both less awesome and infallible, and also less frightening. Braico even sees his own students coming to that perspective, as they advance.
“They start to see the weaknesses of these tools, and then they start to become more interested in how can you properly integrate them into our professional work,” he says. “By that point, they’re starting to see the difference between an AI that can write their first-year program for them, and the fact that it’s not going to build a complex system all by itself.”
In other words: knowledge is power. And that may also explain why Cossar is something of an optimist, when it comes to how AI will impact learning in the future. She’s had many talks with colleagues about how difficult the problem has become, and how challenging it is to navigate; she knows some professors are wrestling with a gnawing sense of despair.
Yet she is enthusiastic, and that comes from what drew her to the job in the first place, she says. Being in the classroom, working with students, teaching them what it means to study history and its value. Her work has always been finding ways to best make those connections. In that regard, AI is just another challenge to grow through.
“I hesitate to say (rising AI use) is a good thing,” Cossar says. “But the learning part of this moment is if we can stop doing this other stuff that was alienating people, and start to really focus on how do we work in the classroom as a community?
“I really think that this is a wake-up call for us. If we can help students when they face that wall of text, go, ‘OK, how do I actually look for things like hierarchies of meaning? How do I break it down and find what’s really important here?’… I actually find it kind of energizing to think about what I could do differently to make things better.”
melissa.martin@freepress.mb.ca
Maggie Macintosh
Education reporter
Maggie Macintosh reports on education for the Winnipeg Free Press. Funding for the Free Press education reporter comes from the Government of Canada through the Local Journalism Initiative.
Our newsroom depends on a growing audience of readers to power our journalism. If you are not a paid reader, please consider becoming a subscriber.