Parties head into election campaign with no rules and little transparency on AI use
OTTAWA — Despite years of warnings about the risk of artificial intelligence influencing elections, Canada is now in the midst of a federal election campaign with no specific rules or guardrails regarding the technology.
The 2025 general election campaign is the first in Canada since generative AI systems like ChatGPT became part of daily life. Critics are asking questions about the role such systems might play in crafting some of the campaign content Canadians will see over the coming weeks.
But with no specific rules or even voluntary guidelines in place — and with political parties themselves offering little information about their possible use of AI-generated material — those questions are hard to answer.
Earlier this month, Conservative Leader Pierre Poilievre posted two French-language videos with elements, including the backgrounds, that appeared to be AI-generated.
The party didn’t respond to questions about whether or how AI had been used in creating the videos. Neither the Liberals nor the NDP replied when The Canadian Press asked them whether they had any issues with the videos.
Elizabeth Dubois, an associate professor and university research chair in politics, communication and technology at the University of Ottawa, said it’s hard to say if or to what extent AI might have been used in the videos “because image and audio manipulation tools are extremely common.”
Some of those manipulations, like the automatic filters that are standard on smartphones, are benign, she said, while “others can lead to confusion or deception, which is when AI use can be more problematic.”
Part of the problem is that there are no “consistently useful” tools to identify AI-generated content, she said.
Dubois said that while the tools to create AI content are “increasingly available and increasingly convincing,” the safeguards in those systems are limited.
“Generally speaking, simply using AI tools is not a problem from a democratic integrity perspective, but a lack of transparency around how these tools are integrated into campaigns is where we start seeing real risks,” she said.
At a press conference in Ottawa on Monday, Chief Electoral Officer Stéphane Perrault said deepfakes are a “serious concern.”
“Synthetic materials have been used in elections around the world over the last year to provide misleading content,” he said, adding people tend to be too confident in their ability to detect deepfakes.
In February, The Canadian Press reached out to the Liberals, Conservatives and NDP to ask them to share their rules on the use of filters or AI on candidate photos or other campaign materials.
The Conservative party didn’t respond. The Liberal party declined to comment on specific tools but said it “fully complies with all Elections Canada rules and regulations for political engagement.”
The NDP said it doesn’t use AI to produce campaign material.
“We provide clear specifications about head shots and other campaign materials so, aside from touch-ups like blurring out a piece of lint or a fly-away hair, Canadians will see our candidates as they are,” NDP national director Lucy Watson said in a statement.
None of the parties responded in mid-March to follow-up questions asking for more information about their rules on AI use. They also didn’t answer when asked whether there should be any rules or guidelines on the use of AI-generated materials in election campaigns.
A spokesperson for Elections Canada said there is nothing in the federal election law that specifically refers to artificial intelligence.
The spokesperson said the law “outlines specific circumstances under which the distribution, transmission or publication of material that falsely indicates to have been made by or under the authority of the Chief Electoral Officer, a returning officer, a political party, a candidate or a prospective candidate is prohibited.”
Perrault has called for the law to be updated to, among other things, require that AI-generated electoral content is clearly identified. That would mean ensuring all images, audio, video and texts that have been “generated or manipulated by AI include a clear transparency marker,” he wrote in a list of recommendations.
The election reform bill C-65, introduced by the Liberal government in 2024, would have changed the law to address the threat posed by AI and deepfakes. But the bill died on the order paper when the House of Commons was prorogued.
Florian Martin-Bariteau, research chair in technology and society at the University of Ottawa, said that bill wasn’t perfect but it was a “good step” toward addressing AI political interference and deepfakes.
Martin-Bariteau was one of the authors of an international report on AI and elections that was released in February.
It noted that in Brazil’s October 2024 election, local politicians and their supporters used AI dozens of times “to produce synthetic images, audio content, or videos to boost their candidacies or undermine their opponents” — material which included deepfake pornography involving five female candidates.
The report called on countries to update their election laws to account for artificial intelligence. In an interview a few weeks before the election began, Martin-Bariteau said that while it wouldn’t be possible to pass a law before the current campaign, the political parties could at least agree on a code of conduct for AI use.
“In the EU at the last election, all of the parties from extreme left to extreme right, they all agreed on a set of rules,” he said. They included rules on clear labelling and not using AI to produce misleading content.
But when asked about it by The Canadian Press, none of the three major federal parties expressed interest in signing up for such a code of conduct.
The report Martin-Bariteau co-authored noted that during the 2024 U.S. presidential election, experts tested major AI models on their ability to deliver accurate information.
“These tests showed discrepancies with respect to information in different languages, and between AI companies’ stated commitments to accurate electoral information and the performance of their models,” the report said.
Martin-Bariteau said that chatbots can be useful in providing voters with important information about things like how to vote. But when they aren’t well set up, he warned, they can hallucinate answers and deliver misleading information.
— The Canadian Press