It takes a village to raise AI responsibly

Opinion

Anthropic, maker of the popular Claude artificial intelligence model, has been facing heat from the U.S. government over the ethics of military AI. Thanks to its safety-first approach, its AI was considered best in class and was approved for use on classified military networks. The company signed a lucrative contract with the Pentagon, and its AI was integrated into military systems. Sounds ominous, for sure.

But the contract specified that the AI could not be used for fully autonomous weapons systems that kill targets without human judgment, or for mass domestic surveillance of Americans. The Pentagon pushed back against these restrictions, even though it had signed the contract on those terms, insisting that the AI could be used for “all lawful purposes,” and quickly sought to punish Anthropic for not capitulating to its demands.

Anthropic stood by its guardrails, on both principle and contract, resisting the dangerous use of AI and risking the loss of government contracts and punishment from the autocratic regime. In solidarity, Sam Altman of OpenAI, Google’s AI division (maker of Gemini) and others have backed the position that these guardrails are necessary in a safe and democratic society. It is good news that there are red lines AI should not cross, and that the companies themselves are defending them.

But what struck me about this battle was a statement from an Anthropic executive in response to the Pentagon’s demands: “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.” This defence draws a clear line around the limits of the company’s AI model, grounded in the deep understanding that only the technology’s creator can have. That understanding becomes apparent when you look at how the model was developed.

Anthropic (a word that literally means relating to human existence) was started as an offshoot of OpenAI; its founders felt that without a responsible approach, AI would never achieve its full potential for positive change in the world. Focused on human interests, they built their model to be “helpful, harmless, and honest.” They even went as far as hiring philosophers, psychologists and ethicists to ensure it behaves with a moral compass. With this human-centred approach, they trained their AI model to be more than just a powerful tool; they gave it a well-rounded education. Like a good parent would.

As an educator, I could relate to Anthropic’s approach. A well-rounded education requires training in many domains. For an AI model, cognitive understanding might be considered its dominant ability: access to the world’s information, and a computational framework able to organize that information in a way humans can understand. But a good education involves much more, such as emotional, social, physical, moral, ethical and cultural development. Kudos to the company for trying to integrate these other capabilities into its model, providing a more human-like education for its AI.

Another important parental quality is understanding the limitations of one’s child, for the child’s own protection and the protection of others. And “child” is an apt term, because by some estimates, AI models reason at the level of a toddler. With this understanding, it makes sense that Anthropic would not want its AI model used for certain military purposes. A toddler able to wield deadly military force: what could go wrong? You’ll have to ask the president.

This brings us to our government’s response to the Tumbler Ridge tragedy. Artificial Intelligence Minister Evan Solomon was deeply disturbed that OpenAI did not inform the RCMP about the shooter’s interactions with ChatGPT (OpenAI is the company behind ChatGPT). B.C. Premier David Eby decried OpenAI for not sharing the information it had about the shooter. Both called on the company to do better to protect people.

But here is where they’re wrong. It is not up to companies to protect people; it is up to governments. Like a good parent, a government must know when to intervene. Ministers cannot deflect blame onto companies, especially when they have the power to enact policies regulating AI use in Canada. Anthropic itself has said it would welcome regulatory frameworks, noting that its safety-first approach to developing AI was a response to the lack of such a framework in the U.S.

In Canada we should be doing much better.

Steps must be taken to incorporate the lessons of Tumbler Ridge into the long-delayed Online Harms Act, and to ensure that future regulations are enacted quickly, keeping pace with the advances, and the potential harms, of rapidly evolving technologies.

Anthropic gives hope that AI can be trained to appeal to our better angels, that kindness, empathy and moral integrity can be part of a technology that seems so much like us. The struggle to understand its place, the duality between AI’s potential for good and bad, however, is all too human.

It takes a village to raise a child, and it also takes a village to protect one.

David Nutbean writes from his home in Oakville, Manitoba. As a longtime administrator and teacher, he understands the importance of carefully training emerging intelligence.
