Pulling the levers behind artificial intelligence
This article was published 21/06/2025, so information in it may no longer be current.
Sometimes, artificial intelligence looks downright stupid. Other times, it just looks dangerous.
A rather famous recent AI mistake/mashup involved both Meta AI and Google being asked about the time zone in Cape Breton, N.S., and both telling users that the area was 12 minutes ahead of Atlantic Standard Time and 18 minutes behind Newfoundland time. It isn’t. The AI systems had merely sampled all they could find on the topic of Cape Breton and time zones — a satirical piece on the comedy site The Beaverton — and presented it as fact.
As the old saying about computers goes: garbage in, garbage out.
AI is getting better, especially in areas where it can sample a large variety of sources of information, but there are still cases where AI has simply invented sources. For that reason, there’s a lot at stake if AI answers are accepted at face value, and if people aren’t willing to go further to verify the sources of material the AI devices are using.
Yet many are doing just that: taking an AI one-and-done approach to “proof.”
So much so that on X (formerly Twitter), users regularly turn to the platform’s Grok AI to establish whether claims posted as fact are actually true, or whether images shared by other users are authentic. That’s certainly better than accepting everything on social media uncritically, but it has become such an accepted form of proof that users happily post Grok’s answers, even citing Grok as their one and only source.
(In unintended hilarity, Grok’s owner, Elon Musk, was labelled a “top misinformation spreader” by Grok itself, a judgment Grok seems to have mysteriously softened since then, now arguing that Musk is both a spreader of misinformation and a target of those who dislike his self-described free speech absolutism.)
But think about the following situation.
Elon Musk posted that “the far left is murderously violent” after two Democratic politicians and members of their families were shot in Minnesota and the alleged shooter was initially misidentified as a Democratic supporter.
When X users, responding to the post, asked Grok, “Who commits more domestic terrorism? The ‘far-left’ or the ‘far-right’?”, it responded, “Data consistently shows far-right groups commit more domestic terrorism in the U.S. than far-left groups, both in frequency and lethality.”
Musk then replied, “Major fail, as this is objectively false. Grok is parroting legacy media. Working on it.”
Working on what? Grok cited sources from the U.S. Government Accountability Office to the Department of Homeland Security to the FBI to the University of Maryland, all saying that far-right terrorism easily outstrips far-left terrorism. Only two of 15 sources were even from the media — and they were reporting on other studies.
Clearly, Musk was letting his own beliefs dictate what was objectively true or objectively false — which he’s welcome to do, because that’s how personal opinions tend to work. You believe things to be true if you agree with them, and doubt their veracity if you don’t.
But “working on it” suggests a new — and real — concern about depending on AI to determine “truth.” Because the machine is only as accurate as its programmer wants it to be.
And that leaves the possibility of a thumb on the scales.
In the next few weeks, a Grok “tweak” may well change its position on just who leads the way in domestic terrorism in the U.S.
If, at the end of the day, AI is only as accurate as the rich person standing behind the machine wants it to be, we’re in deep, deep trouble.