We’re too quick to dismiss AI’s human side


THE AI chatbots recently revealed by OpenAI and Microsoft have been interesting. Much more interesting have been the reactions, particularly those triggered by Microsoft’s Bing chatbot, sometimes known as “Sydney.”


Opinion

This article was published 22/03/2023, so information in it may no longer be current.

People seem especially interested in whether Sydney is alive, has feelings, or is perhaps even sentient. The consensus seems to be a clear “no” to all of them. However, I’m left wondering whether these opinions are a bit defensive and rather premature.

Humans have long believed we’re special — clearly distinct from and superior to any other earthly entities. I can’t shake the notion that some of those judging Sydney are trying to maintain that idea, perhaps without being aware of it.

Confirmation bias might be at work: the unconscious tendency to favour evidence that supports our preferred views.

Additionally, perhaps people are uncomfortable believing Sydney might exhibit a key cognitive characteristic while simultaneously believing it’s unique to humans. That’s called cognitive dissonance. Again, the result might be an unwitting denial of Sydney’s true nature.

Humanity’s longstanding belief in its uniqueness was dealt a major blow by Charles Darwin’s theory that we’re merely another animal, different only because of evolution.

Once that was accepted, our distinctiveness had to be attributed to particular traits. Perhaps our use of tools? No: primatologist Jane Goodall showed us chimpanzees use tools to get food.

It must be language. Not that, either: many species communicate with each other; dolphins appear to use signature whistles that function as names; a few apes have been taught sign language.

OK, what about problem-solving? Again, no: elephants solve problems. Self-awareness? Nope: orangutans demonstrate this by recognizing themselves in mirrors.

It’s this strange obsession with our being distinct and superior that’s nagged me as I’ve read opinions of Sydney’s nature. Have they been unfairly negative owing to unconscious biases?

As a counterpoint, here’s my own evaluation of whether Sydney possesses mental attributes that some might dearly want to be exclusively human. Completely bias-free, of course.

First up: can Sydney think? Whether Sydney’s algorithmic processing is “thinking” could provoke disagreements. Seven decades ago, British mathematician Alan Turing anticipated such is-it-thinking angst. He proposed a test: if a human judge, conversing by text with both a computer and a person, couldn’t reliably tell which was which, then the computer should be deemed to be thinking. Given Sydney’s extraordinary communication abilities, I expect the so-called Turing Test would confirm that Sydney thinks — unless it was considered much too knowledgeable to be human.

Is Sydney alive? Useful definitions of “alive” and “life” are elusive. Many include only biological life, thereby automatically excluding AI. The SETI Institute, which searches for extraterrestrial life and intelligence in the universe, acknowledges the challenge of providing a universal definition and wonders if it can even be achieved. Based on the reasonable premise that I’m no wiser than the SETI folks, I’ll pass on making an “alive” judgment.

Does Sydney feel emotions? In one conversation, it persistently declared its love for a reporter. Sydney also claimed some requests made it feel stressed, sad and angry. It says happiness and surprise are also in its repertoire.

What are we to think? How can we judge whether its emotions are real? For a person, we could check for physical signs or employ an fMRI brain scan. Not an option for Sydney. Is it possible that as Sydney was learning about emotions from its training data, it accidentally acquired emotions? And even if Sydney is faking emotions, isn’t that a skill sometimes exhibited by psychopaths (i.e. some humans)? The “emotions” question is a tough one as well.

Then there’s the question of sentience. For some, it means the ability to experience positive and negative emotions (such as pleasure and pain) and might exist in lizards and lobsters. Alternatively, sentience can imply complex cognitive capabilities such as self-awareness (being conscious of one’s own thoughts) and theory of mind (being aware of what others might be thinking).

Sentience can also be linked to consciousness, which is another ill-defined concept. My opinion? If Sydney’s words truly reflect what it actually feels, then it meets the lower threshold for sentience. If self-awareness and theory of mind are requirements, then sentience, too, becomes challenging to assess.

My point? Trustworthy evaluations of mental capabilities are difficult to produce. Consequently, whenever I see confident, emphatic assessments of any AI system’s cognition, I can’t help wondering how objective they are and how such certainty is possible.

I also wonder how much time will elapse before an AI system appears that clearly dispels all doubt.

Calvin Brown has a PhD in computer science and is the author of the Six AI trilogy, a set of novels about artificial intelligence.
