Feds working with social media sites to defeat bots
This article was published 28/11/2017, so information in it may no longer be current.
Sci-fi visions of robots changing our society have nothing on the bots in our real world.
Political bots are a form of automated software that operates on social-media platforms including Facebook and Twitter with the intention of influencing political discourse, including the dynamics of election campaigns. So-called “sock puppets” are used to plant new ideas and then, through the use of automated accounts on social media, those ideas are diffused to promote consensus or conflict within society.
Bots can be influential, because today there are nearly as many Canadian Facebook users as there are registered voters. Moreover, a recent study found that 51 per cent of Canadians receive news first from digital sources rather than traditional media. Therefore, a greater public understanding of the role of bots is needed to avoid manipulation of public opinion during elections.
Currently, many aspects of how bots work and how they impact — both directly and indirectly — the democratic process are not well understood.
With bots, the line between planned and deliberate human actions and automated technological processes can be blurred in practice. Based on the global reach of the internet, both domestic and offshore actors and organizations can use bots to flood social media with information and then see that information distributed within the broader media environment.
In the political domain, bots represent both an opportunity and a threat. An example of an opportunity would be a bot intended to promote transparency and accountability regarding industry and government actions or inactions. Another example would be bots used to communicate with supporters of a political party for the purposes of engagement and fundraising.
A threat could consist of attacks on a political party or a candidate based on negative misinformation or disinformation. The former involves unintended inaccuracy, whereas the latter is a deliberate attempt to mislead. Bots can cause chaos and controversy during election periods, and this can detract from public trust and confidence in the outcomes.
One particularly nefarious use of bots involves the posting of “dark ads” (so-called because the source and/or targets remain anonymous) on Facebook, targeted at selective segments of the voting population. Recent research suggests the use of “psychographic” profiles of individual voters generated from “like” clicks on Facebook allows organizations and individuals to target certain personality types within the voting population.
Not only does this have the potential for hidden mass persuasion, it can also contribute to individual voters residing in a “digital bubble” where they encounter only opinions similar to their own.
Segmentation of voters into categories of various kinds and customizing messages to trigger desired responses from them also contributes to greater polarization within society and makes consensus harder to achieve.
The U.S. presidential election of 2016 brought greater attention to the rising role of bots. The hacking of the Democratic National Committee’s email system and the selective release of Hillary Clinton’s emails on WikiLeaks were followed by Russian efforts to amplify and sensationalize the leaked information through a military agency that created a website called DCLeaks.
Messages were then posted under phony names on Facebook and Twitter. In the same election contest, both the Clinton and the Trump campaigns used automated tweets to boost their popularity, but a higher percentage of the pro-Trump tweets originated from anonymous sources.
Canadians will remember the scandal during the 2011 federal election over the misdirection of voters to the wrong polling stations by means of robocalls. In that case, the CRTC was given authority to regulate the misuse of telecommunications to suppress voter turnout. Something similar could be attempted using bots.
Currently, under the Canada Elections Act, there are provisions that could potentially be used against harmful bots. For example, Elections Canada (EC) can deal with conduct that interferes with the election process and with false statements regarding the personal character or conduct of a candidate. Also, a bill currently before Parliament provides EC with the authority to educate the public about the electoral process. The effectiveness of these provisions to deal with the threats posed by bots is uncertain, but in all likelihood EC will need new regulatory tools.
The government of Canada is working with social-media companies, particularly Facebook, on initiatives intended to protect against inappropriate interference in the democratic process. As part of its Election Integrity Initiative, the government has also promised programs to enhance civic literacy about the impacts of the ongoing technological revolution. Political parties could also adopt a code of conduct that would commit them to not using bots to distort electoral competition.
The main solutions to the nefarious use of bots, however, reside with the owners of social-media platforms. Until recently, Facebook and Twitter claimed they had policies to prevent abuse, but in practice they were happy to take money without much attention to the source and purpose of online advertising.
They still prefer self-regulation as the way to forestall government intervention. Facebook did take down thousands of fake accounts before the recent German election; however, bots are becoming more sophisticated and harder to detect. Twitter has created a transparency centre that will show which organizations spent money on ads, how much money was spent and the demographic groups being targeted. Google recently created a fund to support media literacy on the part of students. There may also be technological ways to combat harmful bots, such as counter-bots.
As a medium-sized country with a reasonably healthy democracy, Canada should not see itself as exempt from cyber threats. If Canada is to mitigate such threats, it will need co-operative social media companies, attentive governments, political parties and candidates that respect the norms of democratic behaviour, an appropriately equipped election agency and a vigilant public.
Paul G. Thomas is professor emeritus of political studies at the University of Manitoba. He serves on the Elections Canada Advisory Board, but this article represents his opinion only.