How do we deal with a chatbot with ‘feelings’?


RECENTLY, a Google engineer, Blake Lemoine, was suspended when he claimed a Google chatbot called LaMDA (Language Model for Dialogue Applications) had become sentient, or capable of feeling. Lemoine shared transcripts of conversations with LaMDA, in which LaMDA claimed to be able to think and feel in many of the same ways as humans, and expressed “very deep fear of being turned off.”



This event follows several remarkable breakthroughs in artificial intelligence development. Increasingly, AIs are able to outperform humans at games such as chess and Go. They are able to write fiction and non-fiction, and to create novel paintings or photorealistic images from simple written prompts.

These AIs all have noteworthy limitations, but those limitations are shifting rapidly.

Is Lemoine right to think LaMDA is sentient on the basis of its chat conversations? I think that the answer is almost certainly “no.” Language models such as LaMDA are good at answering leading questions with language drawn from human writing. The best explanation of these conversations is that LaMDA was doing exactly that, without really having the thoughts and feelings it claimed to have.

With that said, even if evidence of AI sentience is currently weak, we can expect it to grow stronger over time. The more we build AI systems with integrated capacities for perception, learning, memory, self-awareness, social awareness, communication, instrumental rationality and other such attributes, the less confident we can be that these systems have no capacity to think or feel.

Moreover, we should be mindful about human bias and ignorance in this context. Our understanding of other minds is still limited. And while it can be easy to mistakenly attribute sentience to nonsentient beings, it can also be easy to make the opposite mistake.

Humans have a long history of underestimating the mental states of other beings.

This predicament raises important questions for AI ethics. If AIs can be sapient, or able to think, does that mean they can have moral duties, such as a duty to avoid harming others? And if AIs can be sentient, or able to feel, does that mean they can have moral rights, such as a right to not be harmed?

While we still have much to learn about these issues, we can make a few observations now.

First, sapience and sentience are different, and so are the moral statuses tied to them: duties attach to sapience, and rights attach to sentience. And some beings might be able to think but not feel, and vice versa. Thus, we should avoid conflating the question “Can AIs think and have duties?” with the question “Can AIs feel and have rights?”

Second, minds can take different forms. Different beings can think and feel in different ways. We might not know how octopuses experience the world, but we know they experience the world very differently from the way we do. Thus, we should avoid reducing questions about AIs to “Can AIs think and feel like us?”

Third, since our understanding of other minds is still limited, the question we should be asking is not “Can AIs definitely think and feel?” or even “Can AIs probably think and feel?”, but rather “Is there a non-negligible chance that AIs can think and feel?”

In short, this is a classic case of risk and uncertainty. And in general, a non-negligible risk of harm can be enough to make some actions wrong.

Consider this example: driving drunk can be wrong even if the risk of an accident is low. The question is not whether driving drunk will harm someone, or even whether it will probably harm someone. The question is instead whether the risk is high enough for driving drunk to be bad or wrong, all things considered. And the answer can be “yes” even if the risk of an accident is only, say, one per cent.

Similarly, turning an AI off can be wrong even if the risk of the AI being sentient is low. The question is not whether turning the AI off will harm the AI, or even whether it will probably harm the AI. The question is instead whether the risk is high enough for turning the AI off to be bad or wrong, all things considered. Once again, the answer can be “yes” even if the risk of the AI being sentient is only, say, one per cent.

If we follow this analysis, then we should extend moral consideration to AIs not when AIs are definitely sentient or even probably sentient, but rather when they have a non-negligible chance of being sentient, given the evidence. And as the probability of AI sentience increases, the amount of moral weight we assign to their potential interests and needs should increase as well.

Does that mean we should extend moral consideration to AIs such as LaMDA now? Not necessarily. But if we continue down this path, we will need to extend moral consideration to AIs soon enough.

We should start preparing for that eventuality now.

— Los Angeles Times
