As AI simulacra get ‘better,’ life sure to get worse


Opinion

Earlier this week, Zelda Williams took to Instagram with a plea that swelled into a searing release of justified anger. Her request: for fans — and trolls — to stop sharing increasingly lifelike AI-generated videos of her late father, the legendary comedian Robin Williams.

“Please, just stop sending me AI videos of Dad,” she wrote. “Stop believing I wanna see it or that I’ll understand… please, if you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop. It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.

“To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough,’ just so other people can churn out horrible TikTok slop puppeteering them is maddening,” she added. “You’re not making art, you’re making disgusting, over-processed hot dogs out of the lives of human beings.”

Nor is Williams the only child of a cultural icon to be deluged this way. On the social media site formerly known as Twitter, Bernice King, youngest daughter of Martin Luther King Jr., co-signed Williams’ plea with a seven-word declaration.

“I concur concerning my father,” King wrote simply. “Please stop.”

Amidst the endless swamp of my social media feed, their cries for sanity stood out. I recognized the pain in them, as a fellow grieving daughter, though of course I cannot precisely relate: my father was not a celebrity, and so I have, thankfully, not been inundated with fans manufacturing his image.

Do any of those fans think they’re helping? Is it a misguided desire to console? It’s true that, in the six years since my father died, there hasn’t been a day I didn’t ache to see him again, to hold a new conversation, to hear him talk about the events of the world since he left.

Yet one of the most crucial parts of grief, and of healing, is to find peace with the fact there is a bracket around life that is inviolable. That there will be no new memories, no new conversations; that the face of those we’ve loved and lost, or their voice, existed once and never will again.

So to be flooded with images of a departed loved one doing things they never did, or saying things they never said — this is not a kindness, but a torment. If a grieving relative isn’t sufficiently far in their healing, it could even be dangerous, a siren song to slip into a mental world where the unbreakable wall of death is far more porous.

A world where not even death is final; where what existed is fungible with what didn’t; where we cannot be sure which is a memory we truly held, and which was fabricated for us to hold. What could happen to those who fall too deep into that world? And for that matter, what will happen to all of us, as we are thrust into that world without our consent?

Williams’ plea comes at a critical moment. The pace at which AI video generation is improving should terrify us. It’s hard to believe, but it’s been only a couple of years since early text-to-video models became popular, and already the entire internet is awash in fake videos that have become increasingly difficult to detect.

Consider that, just two years ago, humans in AI-generated videos still frequently appeared with impossible anatomy — most commonly, an odd number of breasts or a variable number of fingers. Those glaring signs of fakery are rare in the new models. Now, figures in videos can move, speak, and even sing with disturbing fidelity.

The potential for abuse is limitless. On Sept. 30, when OpenAI unveiled its new Sora 2 video-generation platform, one developer showed off a video he’d had it make with a quick text prompt: a mimicry of security camera footage showing OpenAI CEO Sam Altman shoplifting from Target, and being caught by a security guard.

There were a few “tells” that the video was made by AI — a box on the shelf moved without the “Altman” figure touching it — but the overall effect was startlingly realistic. The video quality was appropriately grainy for its supposed source; the human figures moved and reacted to each other in largely realistic ways.

What was most disturbing, however, was that developers proudly showed it off, apparently heedless of the obvious problem: it’s not good to build a world in which anyone, without any special-effects training or resources, can whip up a video of a real person committing a crime, one that will fool most casual viewers.

Now, anyone with a grudge can, with a few keystrokes, create images that could get people fired from their jobs and estranged from their spouses — or change the course of local and global events. Even if we grow savvier to the risk, the effect is corrosive: every day, we can trust what we see less and less.

We are being shoved over the threshold of that world now, with little protection.

The AI industry has shown little serious commitment to reducing these harms. One nominal safeguard: videos generated by Sora 2 emerge with a watermark, a self-promoting flash of the Sora 2 logo. Yet less than two weeks after its release, easily accessible tools to remove those watermarks already abound, leaving no sign a video was AI-generated.

If the industry won’t check itself, global governments must. Canada has explored legislation to regulate AI, though it stalled in Parliament earlier this year. We need to move faster, and more decisively. The capacity for irreparable damage to our lives, institutions and social fabric is far too vast.

At a minimum, laws must ensure that AI cannot replicate the images or voices of any real people without their consent, and that every video generated by AI has some sort of unalterable, immediately visible label declaring it as such; and there must be a fast and sure way to enforce this. That is probably not enough to contain the danger, but it’s a start.

Ultimately, we have to get to a place where robust regulation ensures that boundaries of what is real and what isn’t are not further eroded. That’s especially true when it comes to the most essential part of ourselves, the one thing that most defines us in the world and can most damage us when stolen: our own image.

Many will disagree. When Zelda Williams pleaded with the world to stop sending her videos that amounted to “disgusting, over-processed hot dogs out of the lives of human beings” like her father, at least some AI fans appeared to misunderstand the source of her pain, and her anger.

“Don’t worry,” one person wrote, in a comment. “It will get more realistic soon.”

That’s precisely the problem, of course. But as I considered the comment further, I realized it wasn’t actually a misguided attempt to assure her that things would get better. It was a taunt — or even, given the underlying belief that the essential likeness of a person is free to take and warp and control, a threat.

melissa.martin@freepress.mb.ca

Melissa Martin
Reporter-at-large

Melissa Martin reports and opines for the Winnipeg Free Press.

Every piece of reporting Melissa produces is reviewed by an editing team before it is posted online or published in print — part of the Free Press’s tradition, since 1872, of producing reliable independent journalism. Read more about the Free Press’s history and mandate, and learn how our newsroom operates.

Our newsroom depends on its audience of readers to power our journalism. Thank you for your support.
