Take cautious approach to AI
This article was published 08/07/2023, so information in it may no longer be current.
This editorial was written by a human being.
It might seem odd and unnecessary to declare that the text of a story in this newspaper is the work of an actual person, but that statement has become very relevant.
Recent developments in artificial intelligence (AI) have created an environment in which it’s entirely possible that something you’re reading is wholly the creation of AI technology such as ChatGPT, a language-processing tool that generates human-like conversations and documents based on prompts from its users.

Jenny Kane / AP Files
Knowing who or what you’re dealing with: human or computer?
ChatGPT was created by OpenAI, a highly controversial artificial-intelligence research company that has been accused of illegally “scraping” the personal data of hundreds of millions of internet users, in violation of privacy, intellectual-property and anti-hacking laws. Its arrival has prompted officials in many professional and academic disciplines to reconsider the risks and benefits of AI, and to hurriedly create AI use-and-disclosure policies aimed at ensuring standards of accuracy and legality are protected.
One such policy has been established in Manitoba’s legal community, with the issuance late last month by Chief Justice Glenn Joyal of a practice direction requiring attorneys and self-represented litigants to disclose the use of AI in the preparation of submissions to Court of King’s Bench.
All submissions to the court must now indicate whether AI was used in their preparation and, if so, how the nascent technologies were applied.
“It’s a modest first step, which is using a tone that’s both cautionary and anticipatory,” Mr. Joyal explained. “We don’t know how (AI) is going to be used, and we don’t know how it’s going to evolve, but we have to be cautious with respect to it.”
The order arises from concerns about the accuracy and reliability of information generated by AI programs. Underscoring those concerns is a recent incident in Manhattan’s federal court, in which lawyers blamed ChatGPT after fictitious case law was found to have been included in a filing.
It’s a prudent preliminary step by Manitoba’s court system, one that is no doubt reflective of discussions taking place in countless workplace settings, including newspapers, as AI technology quickly evolves and its influence becomes more widely felt.
While many have touted the likes of ChatGPT as major breakthroughs that will advance human communication and streamline once-cumbersome creative processes, critics warn that those who have raced to embrace such AI advances do so at their own peril.
A recent article in The Atlantic, titled “ChatGPT Is Dumber Than You Think,” warns the AI function “lacks the ability to truly understand the complexity of human language and conversation. It is simply trained to generate words based on a given input, but it does not have the ability to truly comprehend the meaning behind those words. This means that any responses it generates are likely to be shallow and lacking in depth and insight.”
The kicker is that the article’s author didn’t actually write the paragraph; ChatGPT did, in response to a request for a critique of ChatGPT’s popularity crafted in his writing style.
The author’s conclusion: “Treat it like a toy, not a tool.”
That’s the sort of healthy skepticism and continuing caution that must be applied to the use of all the AI options that are insinuating themselves into our personal and professional lives. Like content found in Wikipedia and so much more of what exists online, the accuracy of what AI delivers must always be scrutinized and double- and triple-checked before being considered fit for public consumption.
Simply put, it’s the genuinely intelligent thing to do.