What does ‘agentic’ AI mean? Tech’s newest buzzword is a mix of marketing fluff and real promise


For technology adopters looking for the next big thing, “agentic AI” is the future. At least, that's what the marketing pitches and tech industry T-shirts say.


What makes an artificial intelligence product “agentic” depends on who’s selling it. But the promise is usually that it’s a step beyond today’s generative AI chatbots.

Chatbots, however useful, are all talk and no action. They can answer questions, retrieve and summarize information, write papers and generate images, music, video and lines of code. AI agents, by contrast, are supposed to be able to take actions autonomously on a person’s behalf.

(AP Illustration / Peter Hamlin)

If you’re confused, you’re not alone. Google searches for “agentic” skyrocketed from near obscurity a year ago to a peak this fall. Merriam-Webster hasn’t added it to the dictionary but lists “agentic” as a slang or trending term defined as: “Able to accomplish results with autonomy, used especially in reference to artificial intelligence.”

A new report Tuesday by researchers at the Massachusetts Institute of Technology and the Boston Consulting Group, who surveyed more than 2,000 business executives around the world, describes agentic AI as a “new class of systems” that “can plan, act, and learn on their own.”

“They are not just tools to be operated or assistants waiting for instructions,” says the MIT Sloan Management Review report. “Increasingly, they behave like autonomous teammates, capable of executing multistep processes and adapting as they go.”

How to know if it’s an AI agent or just a fancy chatbot

AI chatbots — such as the original ChatGPT that debuted three years ago this month — rely on systems called large language models that predict the next word in a sentence based on the huge trove of human writings they’ve been trained on. They can sound remarkably human, especially when given a voice, but are effectively performing a kind of word completion.
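To make the "word completion" idea concrete, here is a toy sketch in Python. It is not a real language model: instead of a neural network trained on a huge trove of human writing, it simply counts which word most often follows another in a couple of sample sentences. The autoregressive loop, though — predict one word, append it, predict again — is the same basic pattern the article describes.

```python
# Toy illustration (not a real LLM): a bigram model that "predicts the next
# word" by counting which word most often follows the previous one in its
# tiny training text. Real chatbots do this with neural networks over vast
# corpora, but the loop -- predict, append, repeat -- is the same idea.
from collections import Counter, defaultdict

training_text = (
    "the agent plans the task and the agent acts on the plan "
    "the chatbot answers the question and the chatbot writes the summary"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most common follower of `word` seen in training."""
    if word not in follows:
        return "<end>"
    return follows[word].most_common(1)[0][0]

# Generate text one word at a time, feeding each prediction back in.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```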

That’s different from what AI developers — including ChatGPT’s maker, OpenAI, and tech giants like Amazon, Google, IBM, Microsoft and Salesforce — have in mind for AI agents.

“A generative AI-based chatbot will say, ‘Here are the great ideas’ … and then be done,” said Swami Sivasubramanian, vice president of Agentic AI at Amazon Web Services, in an interview this week. “It’s useful, but what makes things agentic is that it goes beyond what a chatbot does.”

Sivasubramanian, a longtime Amazon employee, took on his new role helping to lead work on AI agents in Amazon’s cloud computing division earlier this year. He sees great promise in AI systems that can be given a “high-level goal” and can break it down into a series of steps and act upon them. “I truly believe agentic AI is going to be one of the biggest transformations since the beginning of the cloud,” he said.

At its most basic level, an AI agent works like a traditional, human-crafted computer program that executes a job, such as launching an application. Combined with an AI large language model, however, it can search for knowledge that enables it to complete tasks without explicit, step-by-step instructions. That means, instead of just helping you draft the language of an email, it can theoretically handle the whole process — receiving a message from your coworker, figuring out what you might want to say, and firing off the response on its own.
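That description boils down to a loop: take a high-level goal, have a model break it into steps, and execute each step with a "tool" the agent is allowed to call. The sketch below is a stand-in rather than any vendor's actual API: plan_steps() is a hard-coded placeholder for a language model, and the email "tools" are hypothetical functions invented for illustration.

```python
# Minimal sketch of an agent loop: plan steps for a goal, then act on each
# step with a tool (an ordinary function). plan_steps() stands in for a
# large language model; the tools and goal are hypothetical placeholders.

def plan_steps(goal: str) -> list[dict]:
    """Pretend LLM planner: returns an ordered list of tool calls for the goal."""
    return [
        {"tool": "read_email", "args": {"sender": "coworker"}},
        {"tool": "draft_reply", "args": {"tone": "friendly"}},
        {"tool": "send_email", "args": {"to": "coworker"}},
    ]

# Each "tool" is just a function the agent is permitted to call.
TOOLS = {
    "read_email": lambda sender: f"Message from {sender}: 'Can you confirm the meeting?'",
    "draft_reply": lambda tone: f"({tone}) Yes, confirmed for 2 p.m.",
    "send_email": lambda to: f"Reply sent to {to}.",
}

def run_agent(goal: str) -> None:
    """Plan once, then act on each step without further instructions."""
    for step in plan_steps(goal):
        result = TOOLS[step["tool"]](**step["args"])
        print(f"{step['tool']}: {result}")

run_agent("Reply to my coworker's email about the meeting")
```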

For most consumers, the first encounters with AI agents could be in realms like online shopping. Set a budget and some preferences, and AI agents can buy things or arrange travel bookings using your credit card. In the longer run, the hope is that they can do more complex tasks with access to your computer and a set of guidelines to follow.

“I’d love an agent that just looked at all my medical bills and explanations of benefits and figured out how to pay them,” or another one that worked like a “personal shield” fighting off email spam and phishing attempts, said Thomas Dietterich, a professor emeritus at Oregon State University who has worked on developing AI assistants for decades.

Dietterich has some quibbles with companies using “agentic” to describe “any action a computer might do, including just looking things up on the web,” but is enthused about the possibilities of AI systems with the “freedom and responsibility” to refine goals and respond to changing conditions as they work on people’s behalf. They can even orchestrate a team of “subagents.”
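The "subagents" idea can be sketched the same way — again as a toy, with made-up roles rather than a real multi-agent framework: a coordinator splits a job, such as the medical-bill example above, among specialized workers and collects their results.

```python
# Toy sketch of orchestrating "subagents": a coordinator dispatches pieces of
# a job to specialized workers and gathers the results. Names and behaviors
# are invented for illustration; real systems coordinate full AI models,
# not simple functions like these.

def bill_reader(task: str) -> str:
    return f"Parsed bill: {task}"

def payment_planner(task: str) -> str:
    return f"Scheduled payment for: {task}"

SUBAGENTS = {"read": bill_reader, "pay": payment_planner}

def coordinator(jobs: list[tuple[str, str]]) -> list[str]:
    """Dispatch each (role, task) pair to the matching subagent."""
    return [SUBAGENTS[role](task) for role, task in jobs]

results = coordinator([("read", "dental claim, March"), ("pay", "dental claim, March")])
print("\n".join(results))
```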

The front of a T-shirt designed for artificial intelligence consulting company Lantern shown in Providence, R.I., on Monday, Nov. 17, 2025. (AP Photo/Matt O'Brien)

“We can imagine a world in which there are thousands or millions of agents operating and they can form coalitions,” Dietterich said. “Can they form cartels? Would there be law enforcement (AI) agents?”

‘Agentic’ is a trendy buzzword based on an older idea

Milind Tambe has been researching AI agents that work together for three decades, since the first International Conference on Multi-Agent Systems gathered in San Francisco in 1995. Tambe said he’s been “amused” by the sudden popularity of “agentic” as an adjective. Previously, the word describing something that has agency was mostly found in other academic fields, such as psychology or chemistry.

But computer scientists have been debating what an agent is for as long as Tambe has been studying them.

In the 1990s, “people agreed that some software appeared more like an agent, and some felt less like an agent, and there was not a perfect dividing line,” said Tambe, a professor at Harvard University. “Nonetheless, it seemed useful to use the word ‘agent’ to describe software or robotic entities acting autonomously in an environment, sensing the environment, reacting to it, planning, thinking.”

The prominent AI researcher Andrew Ng, co-founder of online learning company Coursera, helped popularize the adjective "agentic" more than a year ago, advocating that it be used to encompass a broader spectrum of AI tasks. At the time, he also said he liked that mainly "technical people" were describing it that way.

“When I see an article that talks about ‘agentic’ workflows, I’m more likely to read it, since it’s less likely to be marketing fluff and more likely to have been written by someone who understands the technology,” Ng wrote in a June 2024 blog post.

Ng didn’t respond to requests for comment on whether he still thinks that.
