AI chatbot drags X deeper into gutter

Politicians, victim advocates struggle to hold social media giants accountable for proliferation of misogynistic and child sexual abuse imagery

Concern has intensified in recent weeks over the role of social-networking app X in public and political life, as the platform’s artificial intelligence chatbot has created millions of sexual “deepfakes.”


The chatbot, called Grok, has facilitated a raft of disturbing applications: a female world leader is artificially placed in a bikini alongside her clothed counterparts; the corpse of the victim of the recent ICE shooting in Minneapolis is grafted into a swimsuit; a “father figure” is placed in a sexually suggestive position next to a young woman or teen — all part of a wider trend in which users have employed Grok to digitally remove the clothing of real women and children.

The images proliferated quickly, starting en masse in late December. The New York Times reported Thursday that Grok created and shared at least 1.8 million sexualized images of women over just nine days starting Dec. 31.

Earlier this month, Canada’s privacy commissioner launched an investigation into Elon Musk’s xAI, the company behind the chatbot, and expanded its ongoing inquiry of X Corp, to determine if they obtained “valid consent” from individuals to use their personal information to create deepfakes.

The Internet Watch Foundation, a U.K.-based child-protection organization, has said tools like Grok risk bringing sexual AI imagery of children “into the mainstream.” It recently released data showing a 26,362 per cent rise between 2024 and 2025 in the number of AI videos of child sexual abuse — often involving real victims — discovered by its analysts.

X, formerly Twitter, initially responded to concerns about the chatbot with jokes from its billionaire owner, Musk, and moved to limit Grok’s “nudify” function to its paid subscribers. Then, on Jan. 14, the company said it will not allow any users to edit real people into revealing clothing on the X platform, though this function still exists on Grok’s standalone website and app. (xAI responded to a request for comment with one sentence: “Legacy Media Lies.”)

Despite this latest controversy, X remains a place of connection and information-sharing, with hundreds of millions of users around the world. Many of Manitoba’s federal and provincial politicians use the platform to speak to constituents. All but two of Manitoba’s 14 MPs have been active on the platform within the last month. MLAs are less prominent users overall — the majority do not have an account or have not posted in over a year.

Several experts told the Free Press that this moment underscores the need for politicians to implement federal regulation that spells out how tech giants must handle harmful material on their platforms, rather than to call for a ban of X or a boycott of the platform.

“I think this is a wake-up call for Canada,” said Lloyd Richardson, the director of technology with the Winnipeg-based Canadian Centre for Child Protection. “Let’s get on with it, let’s stop bantering around about the Online Harms Act. Let’s get pen to paper. Let’s get all political parties on the same page.”

While Richardson made clear Grok’s lack of guardrails is a significant problem — the centre has observed offenders discussing the use of the chatbot to create child sexual abuse material — he explained that Canada’s lack of regulation encourages lax behaviour from all platforms.

“The reality is we don’t have any legislation related to what online providers can and cannot do,” Richardson said. “Just because you make something — that content — illegal, doesn’t mean that you have a set of rules for how the platform is going to operate.”

The Online Harms Act, which would have created a Digital Safety Commission and forced social media companies to deal more proactively with harmful sexual material, including by removing such content within 24 hours, died on the order paper last year.

Mark Carney’s Liberal government has yet to introduce a similar bill. A spokesperson for the Department of Heritage declined to specify if, and when, it plans to introduce a new version of online harms legislation, saying only that the government “intends to act swiftly” to better protect Canadians on this issue.

However, the Globe and Mail reported earlier this week that a new version of the bill is expected to be introduced within months.

“Let’s get on with it, let’s stop bantering around about the Online Harms Act. Let’s get pen to paper. Let’s get all political parties on the same page.”

Currently, social media companies are required to report child sexual abuse material when they become aware of it to law enforcement and potentially face legal ramifications if they don’t remove it. Yet these legal sanctions are extremely rare.

There is no regulation that determines how quickly a platform must act once notified, nor are platforms obligated to engage in any proactive detection, Richardson explained. In practice, this means that even if an image depicting child sexual abuse exists in hundreds of places on a single platform, the provider is only obligated to remove it from the specific page it was directed to.

“It turns into a sort of whack-a-mole game where you notify a company, ‘hey, you saw this image. Please remove it.’ They remove it. They’ve done legally what they need to do. But there’s no expectation of, like, if that shows up on your system again, well, why is that?” Richardson said.

There are technical tools available to prevent this from happening, Richardson said, including the centre’s own Project Arachnid.

“When you keep having to tell a company, ‘OK, use these particular search terms to find child sexual abuse material on your platform’ — it shouldn’t be up to a charity in Winnipeg, Manitoba, pointing out to you consistently that you’re hosting child sexual abuse material,” he said.

Sexual deepfakes are just one element of a much bigger web of online harms: reports of online sexual luring targeting Canadian children increased by 815 per cent between 2018 and 2022, and, as of earlier this month, Project Arachnid has issued more than 141 million “takedown notices” globally to electronic service providers to remove child sexual abuse material or other related harmful content.

“It shouldn’t be up to a charity in Winnipeg, Manitoba, pointing out to you consistently that you’re hosting child sexual abuse material.”

In a recent statement, Canada’s Artificial Intelligence Minister, Evan Solomon, wrote that “deepfake sexual abuse is violence” and referenced his government’s plan to amend the Criminal Code to ensure deepfakes are included in Canada’s definition of an intimate image for the purposes of criminalizing their non-consensual distribution.

Rosel Kim, a senior staff lawyer at the Toronto-based Women’s Legal Education and Action Fund, who leads the organization’s work on technology-facilitated gender-based violence, pointed out that legal responses, while important, tend to be slow — and don’t prevent the material from being created in the first place.

“The longer it stays up, the more likely it will be shared and amplified,” Kim said.

And, as Kim pointed out, getting harmful material — whether child sexual abuse, non-consensually shared intimate images or sexualized deepfakes — removed from the web is a job that typically falls to survivors. In a 2024 report, survivors of child sexual abuse told the child-protection centre that tech companies slow-walked takedown requests, taking weeks to act, or ignored pleas altogether.

“The longer it stays up, the more likely it will be shared and amplified.”

Like the centre, LEAF is calling for the creation of a regulator to deal specifically with online gender-based violence: both to provide legal remedies and support to victims, including to assist them in getting content taken down quickly, but also to develop training and education.

Australia, which has an eSafety Commissioner, is a possible model, Kim noted. She also pointed to the need for a federal law allowing for the quick removal of harmful sexual material without a court order.

Kim said companies should be required to publicly detail their abuse-reporting mechanisms and submit to independent audits.

Similarly, for Manitoba Senator Marilou McPhedran, while criminal-law changes, such as Bill C-16, are important, a raft of other interventions are needed, including the implementation of online harms legislation, as countries like Australia and the U.K. have opted to do.

“I think there’s a real limit to what the Criminal Code of Canada can actually do in relation to the social media giants, and I mean, I think they’re monsters, and we can’t just hide from the monsters,” she said.

“As odious as it often is to even go on X, I think it remains a primary communication tool.”

But McPhedran, a longtime lawyer and women’s-rights advocate turned parliamentarian, isn’t shying away from her own X account. She explained that her council of youth advisers has spoken to her about the importance of sharing her Senate work on social media.

“As odious as it often is to even go on X, I think it remains a primary communication tool,” McPhedran said. “As a parliamentarian, I’m looking at ways, first of all, to be on social media, be as open in communication and responsive as possible, and to call out, name and work against the exploitation.”

As for solutions, McPhedran also pointed to the need for more media-literacy education to help young people navigate online issues like deepfakes and disinformation.

For McPhedran, this online abuse raises the question of whose freedom of speech is being protected and whose is being silenced.

“These are assaults. And they are designed to shut women up and to shut women out, and that includes girls and young women, and the incentive (is) to make money, to make profit,” McPhedran said.

“It’s highly profitable misogyny.”

marsha.mcleod@freepress.mb.ca

Marsha McLeod
Investigative reporter

Marsha is an investigative reporter. She joined the Free Press in 2023.
