‘Robot rebellion’ irrational fear
This article was published 30/07/2020 (1894 days ago), so information in it may no longer be current.
Artificial intelligence is everywhere. It helps drive your car, recognizes your face at the airport’s immigration checkpoint, interprets your CT scans, reads your resumé, traces your interactions on social media and even vacuums your carpet. As AI encroaches on every aspect of our lives, people watch with a mixture of fascination, bewilderment and fear.
AI’s overthrow of humanity is a familiar trope in popular culture, from Isaac Asimov’s I, Robot to the Terminator movies and The Matrix. Some scholars express similar concerns. The Oxford philosopher Nick Bostrom worries that artificial intelligence poses a greater threat to humanity than climate change, and the bestselling historian Yuval Noah Harari warns that the history of tomorrow may belong to the cult of Dataism, in which humanity willingly merges itself into the flow of information controlled by artificial systems.
But in truth, these doomsday scenarios are nowhere in sight. In a critical evaluation of AI, the cognitive and computer scientists Gary Marcus and Ernest Davis demonstrate that the state of the art in AI is still quite far from true intelligence. When asked to provide a list of restaurants that are not McDonald’s, Siri still spits out a list of local McDonald’s restaurants; she just doesn’t get the “no” part of “no.” AI can also fail to recognize familiar objects in unfamiliar contexts (a baby on the highway) or to separate associations from causes. In short, AI still lacks “common sense.”

Make no mistake — AI does pose many real dangers to us: to our personal privacy and security, to our democracy and to the future of the economy. These are all very good reasons to watch it closely and regulate it aggressively.
People don’t merely worry that the new technology could cause accidents or fall into the wrong hands. With AI, people worry that it will acquire autonomous agency and outsmart and overthrow its human masters. The question is why.
In fact, humanity’s worry about being conquered by omnipotent, inanimate, man-made artifacts is much older than computer technology. In the 19th century, Mary Shelley’s Dr. Frankenstein created a humanoid monster who promptly rebelled. Tales such as this suggest that our fear of AI arises not from AI itself, but from the human mind.
This fear emanates from the psychological distinction we draw between mind and matter. If you saw a ball start to roll all by itself, you’d be astonished. But you wouldn’t be the least bit surprised to see me spontaneously rise from my seat on the couch and head toward the refrigerator.
That is because we instinctively interpret the actions of physical objects, like balls, and living agents, like people, according to different sets of principles. In our intuitive psychology, objects like balls always obey the laws of physics — they move only by contact with other objects. People, in contrast, are agents who have minds of their own, which endow them with knowledge, beliefs, and goals that motivate them to move on their own accord. We thus ascribe human actions, not to external material forces, but to internal mental states.
Of course, most modern adults know that thought occurs in the physical brain. But deep down, we feel otherwise. Our unconscious intuitive psychology causes us to believe that thinking is free from the physical constraints on matter. The psychologist Paul Bloom suggests that intuitively, all people are dualists, believing that mind and matter are entirely distinct.
AI violates this bedrock belief. Siri and Roomba are man-made artifacts, but they exhibit some of the same intelligent behaviour we typically ascribe to living agents. Their acts, like ours, are impelled by information (thinking), but their thinking arises from silicon, metal, plastic and glass. While in our intuitive psychology thinking minds, animacy and agency all go hand in hand, Siri and Roomba demonstrate that these properties can be severed — they think, but they are mindless; they are inanimate but semiautonomous.
People don’t tolerate this cognitive dissonance for very long. When faced with a fundamental challenge to our core beliefs, we tend to stick to our guns. Rather than revising our assumptions to match the facts, we bend reality to fit our assumptions, especially when our worldview is at stake.
So rather than admitting the possibility that machines endowed with AI can think, we ascribe to them immaterial mind and agency, and once we do, our view of AI shifts from faithful servant to rebellious menace.
Thus, the AI takeover narrative, its power and timelessness, arises directly from our core — from a cognitive principle that seems to be part of human nature.
While none of this proves that the “robot rebellion” is impossible, it would be a mistake to ignore our own preconceived beliefs that contribute to these fears.
When we focus so much of our attention on improbable scenarios, we run the risk of ignoring other problems posed by AI that are pressing and preventable. Before we can give those very real dangers the attention they deserve, we should rein in our irrational fears that arise from within.
Iris Berent, a professor of psychology at Northeastern University, is author of The Blind Storyteller: How We Reason About Human Nature.
— Los Angeles Times