Artificial Intelligence

Please review each article prior to use: grade-level applicability and curricular alignment might not be obvious from the headline alone.

AI — when you find your servant is your master

Pam Frampton 5 minute read Wednesday, Mar. 11, 2026

When I was 17 and fresh out of high school, I spent a couple of months with friends in Charlottetown, P.E.I., and landed a summer job at an A&W drive-in.

Two-thirds of Manitobans using AI, but a lot aren’t happy about it, survey reveals

Conrad Sweatman 4 minute read Tuesday, Mar. 10, 2026

Manitobans admit they rely on artificial intelligence for daily activities, but are troubled by the emerging technology’s impact on the environment, job security and beyond.

‘Uncover what’s really going on’: UFO researcher in Manitoba supports AI tracking

Brittany Hobson, The Canadian Press 3 minute read Friday, Apr. 24, 2026

WINNIPEG - Artificial intelligence is going to make it easier to spot whether a bird, a plane or an otherworldly creature is in the sky, as Canadians continue to report sightings of unidentified flying objects, says Canada's top UFO expert.

Chris Rutkowski has spent decades researching the phenomenon and is part of Ufology Research, a Manitoba-based organization that tracks UFO sightings in Canada and publishes an annual report.

The group's 2025 analysis, released Monday, includes data taken from observation stations set up by passionate UFO enthusiasts across the country.

"They're gathering scientific data above and beyond just the average person seeing something in the night sky. This is an attempt to quantify UFO sightings," said Rutkowski.


Mother of wounded Maya Gebala sues OpenAI over mass shooting in Tumbler Ridge, B.C.

Ashley Joannou, The Canadian Press 4 minute read Tuesday, Mar. 10, 2026

VANCOUVER - OpenAI's artificial intelligence chatbot acted as the "collaborator, trusted confidant, friend and ally" of the shooter in the Tumbler Ridge, B.C., mass killings, according to a lawsuit by the mother of a girl critically wounded in the attack.

Cia Edmonds, whose 12-year-old daughter Maya Gebala was shot three times, launched the civil court lawsuit on Monday against the American firm, saying its ChatGPT bot provided "information, guidance and assistance" to carry out such an attack.

Edmonds alleges that OpenAI had “specific knowledge of the shooter utilizing ChatGPT to plan a mass casualty event like the Tumbler Ridge mass shooting.”

OpenAI came forward to police after 18-year-old Jesse Van Rootselaar killed eight people and then herself on Feb. 10. The firm said the killer’s ChatGPT account had been shut down last June, but added that she got around the ban by having a second account.

AI company Anthropic sues Trump administration seeking to undo ‘supply chain risk’ designation

Matt O'Brien, The Associated Press 6 minute read Friday, Apr. 24, 2026

Artificial intelligence company Anthropic is suing to stop the Trump administration from enforcing what it calls an “unlawful campaign of retaliation” over its refusal to allow unrestricted military use of its technology.

Anthropic asked federal courts on Monday to reverse the Pentagon’s decision last week to designate the artificial intelligence company a “supply chain risk.” The company also seeks to undo President Donald Trump's order directing federal employees to stop using its AI chatbot Claude.

The legal challenge intensifies an unusually public dispute over how AI can be used in warfare and mass surveillance — one that has also dragged in Anthropic's tech industry rivals, particularly ChatGPT maker OpenAI, which made its own deal to work with the Pentagon just hours after the government punished Anthropic for its stance.

Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the government's actions against the San Francisco-based company.

It takes a village to raise AI responsibly

David Nutbean 5 minute read Saturday, Mar. 7, 2026

Anthropic, maker of the popular Claude artificial intelligence model, has been facing heat from the U.S. government over the ethics of military AI. Thanks to its safety-first approach, its model was considered best in class and was approved for use on classified military networks; the company signed a lucrative contract with the Pentagon, and its AI was integrated into military systems. Sounds ominous, for sure.

But the contract specified that the AI could not be used for fully autonomous weapons systems that kill targets without human judgment, or for mass domestic surveillance of Americans. The Pentagon pushed back against these restrictions, even though it had signed the contract on those terms, insisting that the AI could be used for “all lawful purposes,” and quickly sought to punish Anthropic for not capitulating to its demands.

Anthropic stood by its guardrails, on both principle and contract, resisting dangerous uses of AI at the risk of losing government contracts and drawing punishment from the autocratic regime. In solidarity, Sam Altman of OpenAI, Google’s AI division (Gemini) and others have backed the position that these guardrails are necessary in a safe and democratic society. It is good news that there are red lines AI should not cross, and that the companies themselves are defending them.

But what struck me about this battle was a statement from an Anthropic executive in response to the Pentagon’s demands, which read: “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.” This defence draws a clear line around the model’s limits, grounded in the deep understanding that only its creator can have. That becomes apparent when you look at how the model was developed.

Pentagon’s chief tech officer says he clashed with AI company Anthropic over autonomous warfare

Matt O'Brien, The Associated Press 5 minute read Friday, Apr. 24, 2026

A top Pentagon official said Anthropic's dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump's future Golden Dome missile defense program, which aims to put U.S. weapons in space.

U.S. Defense Undersecretary Emil Michael, the Pentagon's chief technology officer, said he came to view the AI company's ethical restrictions on the use of its chatbot Claude as an irrational obstacle while the U.S. military works to give greater autonomy to swarms of armed drones, underwater vehicles and other machines, in competition with rivals like China that could do the same.

“I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real and we’re starting to see earlier versions of that,” Michael said in a podcast aired Friday. “I need someone who’s not going to wig out in the middle.”

The comments came after the Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defense work using a rule designed to prevent foreign adversaries from harming national security systems.

Eby says OpenAI’s Altman will apologize to Tumbler Ridge, B.C., in wake of shootings

Wolfgang Depner, The Canadian Press 4 minute read Friday, Apr. 24, 2026

VICTORIA - British Columbia Premier David Eby said OpenAI CEO Sam Altman has agreed to apologize to the people of Tumbler Ridge after the mass shooting by a user of the firm's technology, whose worrisome online behaviour the company did not flag to police.

"Everybody on the call recognized that an apology is nowhere near sufficient, but also that is completely necessary," Eby said of his conversation with Altman on Thursday.

OpenAI will also work with the province to come up with recommendations for federal regulatory standards on artificial intelligence and reporting of problematic interactions with its users, Eby said.

The premier said after the virtual meeting with Altman that OpenAI will work on the apology with the mayor of Tumbler Ridge, where eight victims were shot dead on Feb. 10 by Jesse Van Rootselaar.

OpenAI agrees to strengthen safeguards following B.C. mass shooting: minister

Wolfgang Depner, The Canadian Press 3 minute read Friday, Apr. 24, 2026

Federal Artificial Intelligence Minister Evan Solomon says the CEO of OpenAI has agreed to take several actions to bolster safety, including providing a report outlining the new systems the firm is developing to identify high-risk offenders and policy violators.

A statement from Solomon following his meeting Wednesday with Sam Altman says the minister will also ask the Canadian AI Safety Institute to examine the company's model and provide expert technical advice to his office.

The meeting follows the revelation that OpenAI banned the mass shooter in Tumbler Ridge, B.C., from using its ChatGPT chatbot last June due to worrisome interactions but did not alert law enforcement before the killings last month.

OpenAI has said new protocols would have resulted in Jesse Van Rootselaar's interactions being flagged to police, but Solomon says the tragedy "demands answers and stronger safeguards when powerful AI technologies are involved."

Province asks public to weigh in on rules for AI

Free Press staff 2 minute read Wednesday, Mar. 4, 2026

The Manitoba government may consider setting age limits on using artificial intelligence or require private sector users to ask for consent before accessing residents’ data.

The province is launching a series of public consultations to explore changes to its data privacy laws so residents have enforceable rights, Innovation and New Technology Minister Mike Moroz said Wednesday in a news release.

The consultations will also look to establish clear rules for responsible AI use, particularly when the systems are designed to “make, recommend or influence decisions that affect a person’s rights, opportunities, benefits or access to essential services,” the release said.

The measures aim to address risks such as identity theft, deepfakes, child-targeted manipulation, biased algorithms and misuse of personal data in public and private systems.