The always and never of being watched

Opinion

I saw a snippet of an interview with an AI cheerleader — there are so many just now that I don’t even remember who it was, and I haven’t been able to track it down since — arguing that crime could be stopped if, like on-duty police, we were all required to wear personal cameras to track our every daily step, sharing that data constantly with central databases. People would “behave” because they would know they were being watched, and would be caught if they misbehaved.

The same “expert” argued that the world would be a better place if every scrap of personal information, from health records at places like Britain’s National Health Service to credit data on down, were pooled into one great database that AI could then sample and work with — that the more data AI had access to, the better the results it could provide.

And the only thing I could think of, and that haunted me for days, was that we would all be sentenced to live in a modern version of the panopticon.

But worse.

One where the walls and surveillance were invisible, but constant. And, unfortunately, not tamper-proof.

The panopticon was the invention of Jeremy Bentham in 1785, the idea being to build a prison that could house the largest number of prisoners with the smallest number of guards. A central tower would hold the guards: through small windows, the guards could observe prisoners, but the prisoners could not see if the guards were watching them at any particular time.

The prisoners’ cells would ring the central tower, with one entire side of each cell open so the guards could view the entirety of any cell at any time they chose.

Not knowing when and if you were being watched, Bentham argued, meant that prisoners would constantly be on their best behaviour.

Now, imagine having AI as your panopticon guard — not only a guard that could be watching you at any time, without your knowledge, but, in fact, a guard that would be watching you at any time — the only small saving grace being that your “misbehaviour” might not be large enough at any particular time to trigger punishment.

Your only safety? Complying or just plain flying under the radar. Perhaps, on the face of it, a safer world, but at what cost?

But even that may not be enough, in the world of AI.

The other side of this AI-opticon is that seeing is believing, even though what you see may not be real. AI gives clear-cut and convincing answers, and posits them definitively, even when they’re wrong.

A lot depends on what’s going in, and where the information is coming from — AI is already seen as a single-source, argument-ending tool, though it has glaring blind spots. Lest you think I’m simply a Luddite, AI does have its place — but AI as judge and jury is a spectacularly bad idea.

Garbage in, garbage out, as the saying goes. But the garbage — from bad actors with good AI tools — is improving.

First of all, deepfakes are getting better and better. Right now, even badly falsified video and images are enough to convince people someone has done something wrong — and as those fakes improve, it will get harder to prove them false. Harder still if AI has access to huge pools of data — for example, acres of video of anyone from a political candidate on down — so that an AI creator can capture every nuance of how a politician’s lips and mouth form words, and even which words they are most likely to use.

Then there’s the fact that, as we already know, those who control the input data for AI models — and even those who have their fingers on the algorithms of what we see and don’t see on social media sites, choking off the positions they disagree with and boosting the profile of those they favour — have the ability to skew “facts.”

When Elon Musk’s Grok AI answered questions about whether recent U.S. domestic terrorists were more often right-wing or left-wing, it said, unequivocally, right-wingers.

Musk promised a tweak. Grok’s less definite now.

People are already trusting AI to give them the straight goods, forgetting that, at heart, it’s only a tool, and as long as it’s a tool, what it delivers depends entirely on the hand controlling it.

Supporters of a full unleashing of AI suggest that handing over all data to AI will block tampering by its handlers. To that, all I can say is that their naïveté is remarkable. It will simply build better tools to make the tampering all the more seamless.

I, for one, do not relish a future living under the heavy thumb of our AI masters — or, more accurately, under the thumbs of the masters of AI.

And wandering among the trees, as far out of reach of the growing AI dependence as possible, seems a better and better option.

Russell Wangersky is the Perspectives Editor at the Free Press. He can be reached at russell.wangersky@freepress.mb.ca

Russell Wangersky
Perspectives editor

Russell Wangersky is Perspectives Editor for the Winnipeg Free Press, and also writes editorials and columns. He worked at newspapers in Newfoundland and Labrador, Ontario and Saskatchewan before joining the Free Press in 2023. A seven-time National Newspaper Award finalist for opinion writing, he’s also penned eight books. Read more about Russell.

Russell oversees the team that publishes editorials, opinions and analysis — part of the Free Press’s tradition, since 1872, of producing reliable independent journalism. Read more about Free Press’s history and mandate, and learn how our newsroom operates.

