AI tech needs stronger regulation


Opinion

This article was published 20/12/2023, so information in it may no longer be current.

For as long as children have been online, there has been danger, and teaching them to avoid it has had to quickly become part of every parent’s responsibilities.

But a recent incident in Winnipeg has proven that the online world has evolved into something more malicious than we previously imagined possible.

On Dec. 14, the Free Press reported that Louis Riel School Division officials had learned artificial intelligence software had been used to create nude images of underage students, with the incident centred on Collège Béliveau in Winnipeg’s Windsor Park neighbourhood.

The Associated Press files

Collège Béliveau is grappling with an incident in which artificial intelligence software was used to create fake, sexual images of underage girls.

The discovery of the images has resulted, rightly, in shock and dismay within the community.

It’s a beast of a different nature, even for the online world, in which many young people spend a great deal of time. First, there was the need to instruct children not to provide personal information over the internet, or agree to meet online acquaintances in real life. Then, there was the need to teach them not to share sexual images of themselves, as they easily proliferate online and become impossible to destroy or reclaim.

But this is different. Using AI, the perpetrators of this offence did not need anyone to show up in person, or post an explicit photo of themselves. All they needed was access to a perfectly innocuous photo — of the type which festoon most people’s social media profiles — and with AI’s help they turned it into a crime. No need to convince a victim of anything at all.

Artificial intelligence software has been in use online for some time, since long before ChatGPT and programs like it became the fixation of the tech-savvy everywhere. In many cases, it is used for innocent purposes. The question becomes, then: how can AI technology be contained so that it cannot be used for poisonous purposes?

Government often fails to keep pace with technological advancement. As this situation in a Winnipeg school demonstrates, it’s time to change that. A rapid response is required.

Is it possible to get the AI toothpaste back in the tube? Not likely. The software is everywhere now. However, it is well within the government’s ability to legislate constraints on how AI can be used, who has access to it, and what the AI itself is programmed to do.

Computer software already exists which can detect whether an image is likely to be pornographic in nature. Content moderation is a concept as old as the internet itself. A system of checks and balances, in which AI software flags suspicious content and requires a human moderator’s approval before it is released to the image’s creator, is just one idea.
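The flag-then-approve workflow described above can be sketched in a few lines of code. This is purely an illustrative sketch, not a description of any real system: the `ModerationQueue` class, the threshold value and the risk scores are invented for this example, and in practice the scores would come from trained image-detection software rather than being supplied by hand.

```python
# Illustrative sketch of a "checks and balances" moderation pipeline:
# automated screening flags suspicious items, and flagged items are
# released only after a human moderator approves them.
from dataclasses import dataclass, field

FLAG_THRESHOLD = 0.5  # hypothetical cutoff: scores at or above this need review


@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)   # items awaiting a human decision
    released: list = field(default_factory=list)  # items cleared for the creator

    def submit(self, item: str, risk_score: float) -> str:
        """Route an item: hold flagged content, release low-risk content."""
        if risk_score >= FLAG_THRESHOLD:
            self.pending.append(item)
            return "held for human review"
        self.released.append(item)
        return "released"

    def human_decision(self, item: str, approved: bool) -> None:
        """A human moderator approves or rejects a held item."""
        self.pending.remove(item)
        if approved:
            self.released.append(item)


queue = ModerationQueue()
queue.submit("landscape.png", risk_score=0.1)        # low risk: released automatically
queue.submit("suspect.png", risk_score=0.9)          # flagged: held for a human
queue.human_decision("suspect.png", approved=False)  # rejected: never released
```

The point of the structure is that no flagged item reaches its creator without a human in the loop, which is the kind of safeguard the editorial floats.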

Regulations could also be set to make it impossible to create an AI-generated image based on a pre-existing, uploaded photograph, requiring instead written prompts only for the creation of an image.

What exactly can be done will be something for government and software engineers to sort out. One thing is certain, however: a situation such as the one at Collège Béliveau cannot be accepted as simply one of the possibilities of existing in an online space.

Children, and the rest of us, deserve some assurances that we can be protected from the predations of those who would seek to abuse or humiliate others using little more than a school picture and a publicly available program.

AI is often exalted in the tech world for its supposedly boundless potential. But as has been made abundantly clear by this deeply disturbing incident, it must be limited in some way.

Otherwise, AI’s potential to cause harm will be as boundless as human depravity allows.
