Intelligent agent

{{Short description|Software agent which acts autonomously}}

{{For|the term in intelligent design|Intelligent designer}}

{{distinguish|text = Embodied agent}}

{{merge from|Agentic AI|discuss=Talk:Agentic AI#Merge proposal|date=May 2025}}

[[File:IntelligentAgent-SimpleReflex.png|thumb|Diagram of a simple reflex agent]]

In artificial intelligence, an '''intelligent agent''' is an entity that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through machine learning or by acquiring knowledge. Leading AI textbooks define artificial intelligence as the "study and design of intelligent agents," emphasizing that goal-directed behavior is central to intelligence.

A specialized subset of intelligent agents, '''agentic AI''' (also known as an '''AI agent''' or simply '''agent'''), expands this concept by proactively pursuing goals, making decisions, and taking actions over extended periods, thereby exemplifying a novel form of digital agency.{{Cite SSRN |last1=Mukherjee |first1=Anirban |last2=Chang |first2=Hannah |date=2025-02-01 |title=Agentic AI: Expanding the Algorithmic Frontier of Creative Problem Solving |ssrn=5123621}}

Intelligent agents can range from simple to highly complex. A basic thermostat or control system is considered an intelligent agent, as is a human being, or any other system that meets the same criteria—such as a firm, a state, or a biome.{{sfn|Russell|Norvig|2003|loc=chpt. 2}}

Intelligent agents operate based on an objective function, which encapsulates their goals. They are designed to create and execute plans that maximize the expected value of this function upon completion.{{cite encyclopedia|last1=Bringsjord|first1=Selmer|last2=Govindarajulu|first2=Naveen Sundar|title=Artificial Intelligence|encyclopedia=The Stanford Encyclopedia of Philosophy (Summer 2020 Edition)|date=12 July 2018 |editor=Edward N. Zalta|url=https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/}} For example, a reinforcement learning agent has a reward function, which allows programmers to shape its desired behavior.{{cite news |last1=Wolchover |first1=Natalie |title=Artificial Intelligence Will Do What We Ask. That's a Problem. |url=https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/ |access-date=21 June 2020 |work=Quanta Magazine |date=30 January 2020 |language=en}} Similarly, an evolutionary algorithm's behavior is guided by a fitness function.{{cite journal|last=Bull|first=Larry|title=On model-based evolutionary computation|journal=Soft Computing|volume=3|issue=2|date=1999|pages=76–82|doi=10.1007/s005000050055|s2cid=9699920}}

Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, and the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations.

Intelligent agents are often described schematically as abstract functional systems similar to computer programs. To distinguish theoretical models from real-world implementations, abstract descriptions of intelligent agents are called abstract intelligent agents. Intelligent agents are also closely related to software agents—autonomous computer programs that carry out tasks on behalf of users. They are also referred to using a term borrowed from economics: a "rational agent".{{sfn|Russell|Norvig|2003|loc=chpt. 2}}

== Intelligent agents as the foundation of AI ==

{{Original research|section|date=February 2023|discuss=Talk:Intelligent agent#As a definition of artificial intelligence}}

The concept of intelligent agents provides a foundational lens through which to define and understand artificial intelligence. For instance, the influential textbook Artificial Intelligence: A Modern Approach (Russell & Norvig) describes:

  • Agent: Anything that perceives its environment (using sensors) and acts upon it (using actuators). E.g., a robot with cameras and wheels, or a software program that reads data and makes recommendations.
  • Rational Agent: An agent that strives to achieve the ''best possible outcome'' based on its knowledge and past experiences. "Best" is defined by a performance measure – a way of evaluating how well the agent is doing.
  • Artificial Intelligence (as a field): The study and creation of these rational agents.

Other researchers and definitions build upon this foundation. Padgham & Winikoff emphasize that intelligent agents should react to changes in their environment in a timely way, proactively pursue goals, and be flexible and robust (able to handle unexpected situations). Some also suggest that ideal agents should be "rational" in the economic sense (making optimal choices) and capable of complex reasoning, like having beliefs, desires, and intentions (BDI model). Kaplan and Haenlein offer a similar definition, focusing on a system's ability to understand external data, learn from that data, and use what is learned to achieve goals through flexible adaptation.

Defining AI in terms of intelligent agents offers several key advantages:

  • Avoids Philosophical Debates: It sidesteps arguments about whether AI is "truly" intelligent or conscious, like those raised by the Turing test or Searle's Chinese Room. It focuses on behavior and goal achievement, not on replicating human thought.
  • Objective Testing: It provides a clear, scientific way to evaluate AI systems. Researchers can compare different approaches by measuring how well they maximize a specific "goal function" (or objective function). This allows for direct comparison and combination of techniques.
  • Interdisciplinary Communication: It creates a common language for AI researchers to collaborate with other fields like mathematical optimization and economics, which also use concepts like "goals" and "rational agents."

== Objective function ==

{{Further-text|utility function (economics)|loss function (mathematics)}}

An objective function (or goal function) specifies the goals of an intelligent agent. An agent is deemed more intelligent if it consistently selects actions that yield outcomes better aligned with its objective function. In effect, the objective function serves as a measure of success.

The objective function may be:

  • Simple: For example, in a game of Go, the objective function might assign a value of 1 for a win and 0 for a loss.
  • Complex: It might require the agent to evaluate and learn from past actions, adapting its behavior based on patterns that have proven effective.

The objective function encapsulates all of the goals the agent is designed to achieve. For rational agents, it also incorporates the trade-offs between potentially conflicting goals. For instance, a self-driving car's objective function might balance factors such as safety, speed, and passenger comfort.
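Such trade-offs can be encoded as a single weighted score. The following minimal Python sketch illustrates the idea; the factor names, weights, and candidate plans are illustrative assumptions, not drawn from any cited system:

<syntaxhighlight lang="python">
# Illustrative weighted objective function for a self-driving car.
# The factors and weights are invented for this example.

def objective(safety: float, speed: float, comfort: float) -> float:
    """Score a candidate driving plan (each factor normalized to [0, 1])."""
    return 0.7 * safety + 0.2 * speed + 0.1 * comfort

# The agent prefers whichever plan's expected outcome scores highest.
plans = {
    "cautious":   objective(safety=0.95, speed=0.40, comfort=0.90),
    "aggressive": objective(safety=0.60, speed=0.95, comfort=0.50),
}
best_plan = max(plans, key=plans.get)  # "cautious" under these weights
</syntaxhighlight>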

Different terms are used to describe this concept, depending on the context. These include:

  • Utility function: Often used in economics and decision theory, representing the desirability of a state.
  • Objective function: A general term used in optimization.
  • Loss function: Typically used in machine learning, where the goal is to minimize the loss (error).
  • Reward function: Used in reinforcement learning.
  • Fitness function: Used in evolutionary systems.

Goals, and therefore the objective function, can be:

  • Explicitly defined: Programmed directly into the agent.
  • Induced: Learned or evolved over time.
    • In reinforcement learning, a "reward function" provides feedback, encouraging desired behaviors and discouraging undesirable ones. The agent learns to maximize its cumulative reward.
    • In evolutionary systems, a "fitness function" determines which agents are more likely to reproduce, analogous to natural selection, where organisms evolve to maximize their chances of survival and reproduction.{{sfn|Domingos|2015|loc=Chapter 5}}
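The following short Python sketch illustrates how a fitness function drives selection in an evolutionary system; the bit-string genomes, population size, and mutation rate are illustrative assumptions:

<syntaxhighlight lang="python">
import random

def fitness(genome):
    """Toy fitness function: count the 1-bits in a genome."""
    return sum(genome)

def evolve(pop_size=20, genome_len=10, generations=30):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Fitter genomes are proportionally more likely to reproduce.
        parents = random.choices(
            population,
            weights=[fitness(g) + 1 for g in population],
            k=pop_size)
        # Offspring copy a parent with a small chance of mutation per bit.
        population = [[bit ^ (random.random() < 0.05) for bit in parent]
                      for parent in parents]
    return max(population, key=fitness)

print(evolve())  # typically converges toward an all-ones genome
</syntaxhighlight>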

Some AI systems, such as nearest-neighbor, reason by analogy rather than being explicitly goal-driven. However, even these systems can have goals implicitly defined within their training data.{{sfn|Domingos|2015|loc=Chapter 7}} Such systems can still be benchmarked by framing the non-goal system as one whose "goal" is to accomplish its narrow classification task.<ref>Lindenbaum, M., Markovitch, S., & Rusakov, D. (2004). Selective sampling for nearest neighbor classifiers. ''Machine Learning'', 54(2), 125–152.</ref>

Systems not traditionally considered agents, like knowledge-representation systems, are sometimes included in the paradigm by framing them as agents with a goal of, for example, answering questions accurately. Here, the concept of an "action" is extended to encompass the "act" of providing an answer. As a further extension, mimicry-driven systems can be framed as agents optimizing a "goal function" based on how closely the IA mimics the desired behavior. In generative adversarial networks (GANs) of the 2010s, an "encoder"/"generator" component attempts to mimic and improvise human text composition. The generator tries to maximize a function representing how well it can fool an antagonistic "predictor"/"discriminator" component.{{cite news |title=Generative adversarial networks: What GANs are and how they've evolved |url=https://venturebeat.com/2019/12/26/gan-generative-adversarial-network-explainer-ai-machine-learning/ |access-date=18 June 2020 |work=VentureBeat |date=26 December 2019}}

While symbolic AI systems often use an explicit goal function, the paradigm also applies to neural networks and evolutionary computing. Reinforcement learning can generate intelligent agents that appear to act in ways intended to maximize a "reward function".{{cite news |last1=Wolchover |first1=Natalie |title=Artificial Intelligence Will Do What We Ask. That's a Problem. |url=https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/ |access-date=18 June 2020 |work=Quanta Magazine |date=January 2020 |language=en}} Sometimes, instead of setting the reward function directly equal to the desired benchmark evaluation function, machine learning programmers use reward shaping to initially give the machine rewards for incremental progress.<ref>Andrew Y. Ng, Daishi Harada, and Stuart Russell. "Policy invariance under reward transformations: Theory and application to reward shaping." In ''ICML'', vol. 99, pp. 278–287. 1999.</ref> Yann LeCun stated in 2018, "Most of the learning algorithms that people have come up with essentially consist of minimizing some objective function."<ref>Martin Ford. ''Architects of Intelligence: The truth about AI from the people building it''. Packt Publishing Ltd, 2018.</ref> AlphaZero chess had a simple objective function: +1 point for each win, and −1 point for each loss. A self-driving car's objective function would be more complex.{{cite news |title=Why AlphaZero's Artificial Intelligence Has Trouble With the Real World |url=https://www.quantamagazine.org/why-alphazeros-artificial-intelligence-has-trouble-with-the-real-world-20180221/ |access-date=18 June 2020 |work=Quanta Magazine |date=2018 |language=en}} Evolutionary computing can evolve intelligent agents that appear to act in ways intended to maximize a "fitness function" that influences how many descendants each agent is allowed to leave.
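A minimal Python sketch of reward shaping; the one-dimensional environment, goal position, and bonus term are invented for illustration:

<syntaxhighlight lang="python">
GOAL = 10  # target position in a toy one-dimensional world

def sparse_reward(position):
    """Benchmark-style reward: only reaching the goal pays off."""
    return 1.0 if position == GOAL else 0.0

def shaped_reward(position, previous_position):
    """Shaped reward: also pay a small bonus for incremental progress,
    giving the learner feedback long before the goal is reached."""
    progress_bonus = 0.1 * (position - previous_position)
    return sparse_reward(position) + progress_bonus
</syntaxhighlight>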

The mathematical formalism of AIXI was proposed as a maximally intelligent agent in this paradigm.{{cite journal |last1=Adams |first1=Sam |last2=Arel |first2=Itmar |last3=Bach |first3=Joscha |last4=Coop |first4=Robert |last5=Furlan |first5=Rod |last6=Goertzel |first6=Ben |last7=Hall |first7=J. Storrs |last8=Samsonovich |first8=Alexei |last9=Scheutz |first9=Matthias |last10=Schlesinger |first10=Matthew |last11=Shapiro |first11=Stuart C. |last12=Sowa |first12=John |title=Mapping the Landscape of Human-Level Artificial General Intelligence |journal=AI Magazine |date=15 March 2012 |volume=33 |issue=1 |pages=25 |doi=10.1609/aimag.v33i1.2322|doi-access=free }} However, AIXI is uncomputable. In the real world, an IA is constrained by finite time and hardware resources, and scientists compete to produce algorithms that achieve progressively higher scores on benchmark tests with existing hardware.{{cite news |last1=Hutson |first1=Matthew |title=Eye-catching advances in some AI fields are not real |url=https://www.science.org/content/article/eye-catching-advances-some-ai-fields-are-not-real |access-date=18 June 2020 |work=Science {{!}} AAAS |date=27 May 2020 |language=en}}

== Agent function ==

An intelligent agent's behavior can be described mathematically by an agent function. This function determines what the agent does based on what it has seen.

A percept refers to the agent's sensory inputs at a single point in time. For example, a self-driving car's percepts might include camera images, lidar data, GPS coordinates, and speed readings at a specific instant. The agent uses these percepts, and potentially its history of percepts, to decide on its next action (e.g., accelerate, brake, turn).

The agent function, often denoted as f, maps the agent's entire history of percepts to an action.{{Harvnb|Russell|Norvig|2003|p=33}}

Mathematically, this can be represented as:

:<math>f : P^* \rightarrow A</math>

Where:

  • ''P''<sup>*</sup> represents the set of all possible percept sequences (the agent's entire perceptual history). The asterisk (*) indicates a sequence of zero or more percepts.
  • ''A'' represents the set of all possible actions the agent can take.
  • ''f'' is the agent function that maps a percept sequence to an action.

It is crucial to distinguish between the agent function (an abstract mathematical concept) and the agent program (the concrete implementation of that function).

  • The agent function is a theoretical description.
  • The agent program is the actual code that runs on the agent. The agent program takes the current percept as input and produces an action as output.
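The distinction can be made concrete with a minimal Python sketch of an agent program; the class name and placeholder policy are illustrative assumptions:

<syntaxhighlight lang="python">
class AgentProgram:
    """Concrete agent program approximating an agent function f: P* -> A.

    It stores the percept history and maps it to an action; a real
    implementation would replace decide() with an actual policy."""

    def __init__(self):
        self.percept_history = []  # the sequence in P* seen so far

    def act(self, percept):
        self.percept_history.append(percept)
        return self.decide()

    def decide(self):
        # Placeholder policy: always return a no-op action from A.
        return "noop"
</syntaxhighlight>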

The agent function can incorporate a wide range of decision-making approaches, including:{{cite book |last1=Salamon |first1=Tomas |url=http://www.designofagentbasedmodels.info/ |title=Design of Agent-Based Models |publisher=Bruckner Publishing |year=2011 |isbn=978-80-904661-1-1 |location=Repin |pages=42–59}}

  • Calculating the utility (desirability) of different actions.
  • Using logical rules and deduction.
  • Employing fuzzy logic.
  • Other methods.

== Classes of intelligent agents ==

=== Russell and Norvig's classification ===

{{Harvtxt|Russell|Norvig|2003}} group agents into five classes based on their degree of perceived intelligence and capability:{{Harvnb|Russell|Norvig|2003|pp=46–54}}

==== Simple reflex agents ====

[[File:Simple reflex agent.png|thumb|Simple reflex agent]]

Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: "if condition, then action".

This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state which allows them to disregard conditions whose actuators are already triggered.

Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. If the agent can randomize its actions, it may be possible to escape from infinite loops.

A home thermostat, which turns on or off when the temperature drops below a certain point, is an example of a simple reflex agent.{{Cite web |last=Thakur |first=Shreeya |title=AI Agents: 5 Key Types Explained With Examples // Unstop |url=https://unstop.com/blog/types-of-agents-in-artificial-intelligence |access-date=2025-04-24 |website=unstop.com |language=en}}{{Cite web |date=2025-03-17 |title=Types of AI Agents {{!}} IBM |url=https://www.ibm.com/think/topics/ai-agent-types |access-date=2025-04-24 |website=www.ibm.com |language=en}}
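A thermostat's condition-action rules can be written in a few lines of Python; the set point and switching band below are illustrative assumptions:

<syntaxhighlight lang="python">
def thermostat(temperature_c, set_point=20.0, band=0.5):
    """Simple reflex agent: the action depends only on the current
    percept (the temperature now), never on percept history."""
    if temperature_c < set_point - band:
        return "heater_on"   # condition -> action
    if temperature_c > set_point + band:
        return "heater_off"  # condition -> action
    return "no_change"
</syntaxhighlight>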

==== Model-based reflex agents ====

[[File:Model based reflex agent.png|thumb|Model-based reflex agent]]

A model-based agent can handle partially observable environments. Its current state is stored inside the agent, maintaining a structure that describes the part of the world which cannot be seen. This knowledge about "how the world works" is referred to as a model of the world, hence the name "model-based agent".

A model-based reflex agent should maintain some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. Using this internal model, the agent can track the percept history and the impact of its actions on the environment. It then chooses an action in the same way as a reflex agent.
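The following Python sketch illustrates the pattern; the world model and the obstacle rule are illustrative assumptions:

<syntaxhighlight lang="python">
class ModelBasedReflexAgent:
    """Keeps an internal model of the world and updates it from percepts,
    so it can act sensibly even when the environment is only partially
    observable."""

    def __init__(self):
        self.world_model = {}  # internal state: best guess about the world

    def act(self, percept):
        # Fold the new percept into the model; previously observed but
        # currently unobserved facts persist in the model.
        self.world_model.update(percept)
        # Condition-action rules are evaluated against the model,
        # not just the raw percept.
        if self.world_model.get("obstacle_ahead"):
            return "turn"
        return "move_forward"
</syntaxhighlight>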

An agent may also use models to describe and predict the behaviors of other agents in the environment.<ref>Stefano Albrecht and Peter Stone (2018). Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems. ''Artificial Intelligence'', Vol. 258, pp. 66–95. https://doi.org/10.1016/j.artint.2018.01.002</ref>

==== Goal-based agents ====

[[File:Model based goal based agent.png|thumb|Model-based, goal-based agent]]

Goal-based agents further expand on the capabilities of model-based agents by using "goal" information. Goal information describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.
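A toy planner makes the idea concrete: the breadth-first search below finds an action sequence that reaches a goal state. The line-world environment is an illustrative assumption; real planners use heuristics and richer state representations:

<syntaxhighlight lang="python">
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search for a sequence of actions reaching `goal`.
    `successors(state)` yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # the goal is unreachable

# Toy world: positions on a line; the agent starts at 0 and wants 3.
steps = plan(0, 3, lambda s: [("right", s + 1), ("left", s - 1)])
# steps == ["right", "right", "right"]
</syntaxhighlight>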

ChatGPT and the Roomba vacuum are examples of goal-based agents.{{Cite web |date=2024-12-24 |title=What is an AI agent? A computer scientist explains the next wave of artificial intelligence tools |url=https://www.inverse.com/tech/ai-agents-roomba-and-artificial-intelligence |access-date=2025-04-24 |website=Inverse |language=en}}

==== Utility-based agents ====

[[File:Model based utility based.png|thumb|Model-based, utility-based agent]]

Goal-based agents only distinguish between goal states and non-goal states. It is also possible to define a measure of how desirable a particular state is. This measure can be obtained through the use of a utility function which maps a state to a measure of the utility of the state. A more general performance measure should allow a comparison of different world states according to how well they satisfy the agent's goals. The term utility can be used to describe how "happy" the agent is.

A rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes; that is, what the agent expects to derive, on average, given the probabilities and utilities of each outcome. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.
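The decision rule can be sketched in a few lines of Python; the actions and their outcome distributions are invented for illustration:

<syntaxhighlight lang="python">
def expected_utility(outcomes):
    """`outcomes` is a list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Pick the action that maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "shortcut": [(0.8, 10.0), (0.2, -50.0)],  # fast, but may fail badly
    "detour":   [(1.0, 4.0)],                 # slow but certain
}
best = choose(actions)  # expected utilities: shortcut = -2.0, detour = 4.0
</syntaxhighlight>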

==== Learning agents ====

[[File:IntelligentAgent-Learning.svg|thumb|Learning agent]]

Learning lets agents begin in unknown environments and gradually surpass the bounds of their initial knowledge. A key distinction in such agents is the separation between a "learning element," responsible for improving performance, and a "performance element," responsible for choosing external actions.

The learning element gathers feedback from a "critic" to assess the agent's performance and decides how the performance element, also called the "actor", can be adjusted to yield better outcomes. The performance element, once considered the entire agent, interprets percepts and takes actions.

The final component, the "problem generator," suggests new and informative experiences that encourage exploration and further improvement.
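One concrete way to realize this pattern is tabular Q-learning, sketched below. The mapping of Q-learning onto the learning/performance/critic/problem-generator roles, and all parameter values, are illustrative assumptions:

<syntaxhighlight lang="python">
import random
from collections import defaultdict

class LearningAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # learned action-value estimates
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def performance_element(self, state):
        """Choose the external action (the 'actor')."""
        if random.random() < self.epsilon:
            # Problem-generator role: occasionally explore a new action.
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learning_element(self, state, action, reward, next_state):
        """Use the critic's feedback (the reward) to improve the policy."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
</syntaxhighlight>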

=== Weiss's classification ===

According to {{Harvtxt|Weiss|2013}}, agents can be categorized into four classes:

  • Logic-based agents, where decisions about actions are derived through logical deduction.
  • Reactive agents, where decisions occur through a direct mapping from situation to action.
  • Belief–desire–intention agents, where decisions depend on manipulating data structures that represent the agent's beliefs, desires, and intentions (see the sketch after this list).
  • Layered architectures, where decision-making takes place across multiple software layers, each of which reasons about the environment at a different level of abstraction.
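A minimal Python sketch of the belief-desire-intention data structures; the deliberation rule and the naming convention for beliefs are illustrative assumptions:

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # what the agent holds true
    desires: list = field(default_factory=list)     # goals it would like to achieve
    intentions: list = field(default_factory=list)  # goals it has committed to

    def deliberate(self):
        # Commit only to desires the agent believes are achievable.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(d + "_possible", False)]

agent = BDIAgent(beliefs={"deliver_possible": True},
                 desires=["deliver", "recharge"])
agent.deliberate()
print(agent.intentions)  # ['deliver']
</syntaxhighlight>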

=== Other ===

In 2013, Alexander Wissner-Gross published a theory exploring the relationship between freedom and intelligence in intelligent agents.{{Cite web |last=Box |first=Geeks out of the |date=2019-12-04 |title=A Universal Formula for Intelligence |url=https://geeksoutofthebox.com/2019/12/04/a-universal-formula-for-intelligence/ |access-date=2022-10-11 |website=Geeks out of the box |language=en}}{{Cite journal |last1=Wissner-Gross |first1=A. D. |last2=Freer |first2=C. E. |date=2013-04-19 |title=Causal Entropic Forces |journal=Physical Review Letters |volume=110 |issue=16 |pages=168702 |pmid=23679649 |bibcode=2013PhRvL.110p8702W |doi=10.1103/PhysRevLett.110.168702 |doi-access=free |hdl=1721.1/79750 |hdl-access=free }}

== Hierarchies of agents ==

{{Main|Multi-agent system}}

Intelligent agents can be organized hierarchically into multiple "sub-agents." These sub-agents handle lower-level functions, and together with the main agent, they form a complete system capable of executing complex tasks and achieving challenging goals.

Typically, an agent is structured by dividing it into sensors and actuators. The perception system gathers input from the environment via the sensors and feeds this information to a central controller, which then issues commands to the actuators. Often, a multilayered hierarchy of controllers is necessary to balance the rapid responses required for low-level tasks with the more deliberative reasoning needed for high-level objectives.{{cite web|last1=Poole|first1=David|last2=Mackworth|first2=Alan|title=1.3 Agents Situated in Environments‣ Chapter 2 Agent Architectures and Hierarchical Control‣ Artificial Intelligence: Foundations of Computational Agents, 2nd Edition|url=https://artint.info/2e/html/ArtInt2e.Ch2.S3.html|website=artint.info|access-date=28 November 2018}}
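A two-layer sketch in Python illustrates the balance between a fast reactive layer and a slower deliberative one; the commands and rules are illustrative assumptions:

<syntaxhighlight lang="python">
class HighLevelController:
    """Slower, deliberative layer: reasons about the agent's goal."""
    def command(self, state):
        return "advance" if state.get("distance_to_goal", 0) > 0 else "idle"

class LowLevelController:
    """Fast, reactive layer: can override the high-level command."""
    def act(self, percept, command):
        if percept.get("obstacle"):
            return "stop"  # immediate reflex beats deliberation
        return command

class HierarchicalAgent:
    def __init__(self):
        self.high = HighLevelController()
        self.low = LowLevelController()

    def step(self, percept, state):
        # Sensors feed both layers; the low-level layer filters the
        # high-level command against immediate sensor data.
        return self.low.act(percept, self.high.command(state))
</syntaxhighlight>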

== Alternative definitions and uses ==

"Intelligent agent" is also often used as a vague term, sometimes synonymous with "virtual personal assistant".{{cite news |last1=Fingar |first1=Peter |date=2018 |title=Competing For The Future With Intelligent Agents... And A Confession |url=https://www.forbes.com/sites/cognitiveworld/2018/11/11/competing-for-the-future-with-intelligent-agents-and-a-confession/ |access-date=18 June 2020 |work=Forbes Sites |language=en}} Some 20th-century definitions characterize an agent as a program that aids a user or that acts on behalf of a user.{{cite arXiv | eprint=0902.3513 | last1=Burgin | first1=Mark | last2=Dodig-Crnkovic | first2=Gordana | title=A Systematic Approach to Artificial Agents | date=2009 | class=cs.AI }} These examples are known as software agents, and sometimes an "intelligent software agent" (that is, a software agent with intelligence) is referred to as an "intelligent agent".

According to Nikola Kasabov, IA systems should exhibit the following characteristics:{{sfn|Kasabov|1998}}

  • Accommodate new problem-solving rules incrementally.
  • Adapt online and in real time.
  • Analyze themselves in terms of behavior, error, and success.
  • Learn and improve through interaction with the environment (embodiment).
  • Learn quickly from large amounts of data.
  • Have memory-based exemplar storage and retrieval capacities.
  • Have parameters to represent short- and long-term memory, age, forgetting, etc.

=== Agentic AI ===

{{Main|Agentic AI}}

In the context of generative artificial intelligence, AI agents (also referred to as compound AI systems) are a class of intelligent agents distinguished by their ability to operate autonomously in complex environments. Agentic AI tools prioritize decision-making over content creation and do not require human prompts or continuous oversight.{{Cite news |last=Purdy |first=Mark |date=2024-12-12 |title=What Is Agentic AI, and How Will It Change Work? |url=https://hbr.org/2024/12/what-is-agentic-ai-and-how-will-it-change-work |access-date=2025-04-24 |work=Harvard Business Review |language=en |issn=0017-8012}}

They possess several key attributes, including complex goal structures, natural language interfaces, the capacity to act independently of user supervision, and the integration of software tools or planning systems. Their control flow is frequently driven by large language models (LLMs).{{cite arXiv | eprint=2407.01502 | last1=Kapoor | first1=Sayash | last2=Stroebl | first2=Benedikt | last3=Siegel | first3=Zachary S. | last4=Nadgir | first4=Nitya | last5=Narayanan | first5=Arvind | title=AI Agents That Matter | date=2024 | class=cs.LG }}

Researchers and commentators have noted that AI agents do not have a standard definition.{{Cite web |last1=Zeff |first1=Maxwell |last2=Wiggers |first2=Kyle |date=2025-03-14 |title=No one knows what the hell an AI agent is |url=https://techcrunch.com/2025/03/14/no-one-knows-what-the-hell-an-ai-agent-is/ |archive-url=https://web.archive.org/web/20250318134231/https://techcrunch.com/2025/03/14/no-one-knows-what-the-hell-an-ai-agent-is/ |archive-date=2025-03-18 |access-date=2025-05-15 |website=TechCrunch |language=en-US}}{{Cite web |last=Varanasi |first=Lakshmi |title=AI agents are all the rage. But no one can agree on what they do. |url=https://www.businessinsider.com/what-is-an-ai-agent-depends-who-you-ask-2025-3 |archive-url=https://web.archive.org/web/20250411143511mp_/https://www.businessinsider.com/what-is-an-ai-agent-depends-who-you-ask-2025-3 |archive-date=2025-04-11 |access-date=2025-05-15 |website=Business Insider |language=en-US}}{{Cite web |last=Bort |first=Julie |date=2025-05-12 |title=Even a16z VCs say no one really knows what an AI agent is |url=https://techcrunch.com/2025/05/12/even-a16z-vcs-say-no-one-really-knows-what-an-ai-agent-is/ |archive-url=https://web.archive.org/web/20250512184704/https://techcrunch.com/2025/05/12/even-a16z-vcs-say-no-one-really-knows-what-an-ai-agent-is/ |archive-date=2025-05-12 |access-date=2025-05-15 |website=TechCrunch |language=en-US}}

A common application of AI agents is the automation of tasks—for example, booking travel plans based on a user's prompted request.{{Cite web |date=2024-12-30 |title=AI Agents: The Next Generation of Artificial Intelligence |url=https://natlawreview.com/article/next-generation-ai-here-come-agents |archive-url=https://web.archive.org/web/20250111192703/https://natlawreview.com/article/next-generation-ai-here-come-agents |archive-date=2025-01-11 |access-date=2025-01-14 |website=The National Law Review |language=en}}{{Cite web |date=2024-12-16 |title=What are the risks and benefits of 'AI agents'? |url=https://www.weforum.org/stories/2024/12/ai-agents-risks-artificial-intelligence/ |archive-url=http://web.archive.org/web/20241228013835/https://www.weforum.org/stories/2024/12/ai-agents-risks-artificial-intelligence/ |archive-date=2024-12-28 |access-date=2025-01-14 |website=World Economic Forum |language=en}} Prominent examples include Devin AI, AutoGPT, and SIMA.{{Cite magazine |last=Knight |first=Will |date=2024-03-14 |title=Forget Chatbots. AI Agents Are the Future |url=https://www.wired.com/story/fast-forward-forget-chatbots-ai-agents-are-the-future/ |archive-url=https://web.archive.org/web/20250105095231/https://www.wired.com/story/fast-forward-forget-chatbots-ai-agents-are-the-future/ |archive-date=2025-01-05 |access-date=2025-01-14 |magazine=Wired |language=en-US |issn=1059-1028}} Further examples of agents released since 2025 include OpenAI Operator,{{Cite web |last=Marshall |first=Matt |date=2025-02-22 |title=The rise of browser-use agents: Why Convergence's Proxy is beating OpenAI's Operator |url=https://venturebeat.com/ai/the-rise-of-browser-use-agents-why-convergences-proxy-is-beating-openais-operator/ |archive-url=https://web.archive.org/web/20250222231546/https://venturebeat.com/ai/the-rise-of-browser-use-agents-why-convergences-proxy-is-beating-openais-operator/ |archive-date=2025-02-22 |access-date=2025-04-02 |website=VentureBeat |language=en-US}} ChatGPT Deep Research,{{Cite news |last=Milmo |first=Dan |date=2025-02-03 |title=OpenAI launches 'deep research' tool that it says can match research analyst |url=https://www.theguardian.com/technology/2025/feb/03/openai-deep-research-agent-chatgpt-deepseek |archive-url=https://web.archive.org/web/20250203142402/https://www.theguardian.com/technology/2025/feb/03/openai-deep-research-agent-chatgpt-deepseek |archive-date=2025-02-03 |access-date=2025-04-02 |work=The Guardian |language=en-GB |issn=0261-3077}} and Manus.{{Cite web |last=Chen |first=Caiwei |date=2025-03-11 |title=Everyone in AI is talking about Manus. We put it to the test. |url=https://www.technologyreview.com/2025/03/11/1113133/manus-ai-review/ |archive-url=https://web.archive.org/web/20250312113852/https://www.technologyreview.com/2025/03/11/1113133/manus-ai-review/ |archive-date=2025-03-12 |access-date=2025-04-02 |website=MIT Technology Review |language=en}} Frameworks for building AI agents include LangChain,{{Cite web |last=David |first=Emilia |date=2024-12-30 |title=Why 2025 will be the year of AI orchestration |url=https://venturebeat.com/ai/three-ways-2025-will-be-the-year-of-agentic-productivity/ |archive-url=https://web.archive.org/web/20241230175615/https://venturebeat.com/ai/three-ways-2025-will-be-the-year-of-agentic-productivity/ |archive-date=2024-12-30 |access-date=2025-01-14 |website=VentureBeat |language=en-US}} as well as tools such as CAMEL,{{Cite web |title=CAMEL: Finding the Scaling Law of Agents. The first and the best multi-agent framework. 
|url=https://github.com/camel-ai/camel/ |website=GitHub}}{{cite journal | last =Li | first =Guohao | title =Camel: Communicative agents for "mind" exploration of large language model society | journal = Advances in Neural Information Processing Systems | volume = 36 | pages = 51991–52008 | year = 2023 | url = https://proceedings.neurips.cc/paper_files/paper/2023/file/a3621ee907def47c1b952ade25c67698-Paper-Conference.pdf | arxiv = 2303.17760 | s2cid = 257900712}} Microsoft AutoGen,{{Cite web |last=Dickson |first=Ben |date=2023-10-03 |title=Microsoft's AutoGen framework allows multiple AI agents to talk to each other and complete your tasks |url=https://venturebeat.com/ai/microsofts-autogen-framework-allows-multiple-ai-agents-to-talk-to-each-other-and-complete-your-tasks/ |archive-url=https://web.archive.org/web/20241227061127/https://venturebeat.com/ai/microsofts-autogen-framework-allows-multiple-ai-agents-to-talk-to-each-other-and-complete-your-tasks/ |archive-date=2024-12-27 |access-date=2025-01-14 |website=VentureBeat |language=en-US}} and OpenAI Swarm.{{Cite web |date=2025-01-13 |title=The next AI wave — agents — should come with warning labels |url=https://www.computerworld.com/article/3727412/the-next-ai-wave-agents-should-come-with-warning-labels.html |archive-url=https://web.archive.org/web/20250114023632/https://www.computerworld.com/article/3727412/the-next-ai-wave-agents-should-come-with-warning-labels.html |archive-date=2025-01-14 |access-date=2025-01-14 |website=Computerworld |language=en}}

Companies such as Google, Microsoft and Amazon Web Services have offered platforms for deploying pre-built AI agents.{{Cite web |last=David |first=Emilia |date=2025-04-15 |title=Moveworks joins AI agent library craze |url=https://venturebeat.com/ai/moveworks-joins-ai-agent-library-craze/ |archive-url=https://web.archive.org/web/20250415214729/https://venturebeat.com/ai/moveworks-joins-ai-agent-library-craze/ |archive-date=2025-04-15 |access-date=2025-05-14 |website=VentureBeat |language=en-US}}

Proposed protocols for standardizing inter-agent communication include the Agent Protocol (by LangChain), the Model Context Protocol (by Anthropic), AGNTCY,{{Cite web |last=David |first=Emilia |date=2025-03-06 |title=A standard, open framework for building AI agents is coming from Cisco, LangChain and Galileo |url=https://venturebeat.com/ai/a-standard-open-framework-for-building-ai-agents-is-coming-from-cisco-langchain-and-galileo/ |archive-url=https://web.archive.org/web/20250309045209/https://venturebeat.com/ai/a-standard-open-framework-for-building-ai-agents-is-coming-from-cisco-langchain-and-galileo/ |archive-date=2025-03-09 |access-date=2025-04-02 |website=VentureBeat |language=en-US}} Gibberlink,{{Cite web |last=Zeff |first=Maxwell |date=2025-03-05 |title=GibberLink lets AI agents call each other in robo-language |url=https://techcrunch.com/2025/03/05/gibberlink-lets-ai-agents-call-each-other-in-robo-language/ |archive-url=https://web.archive.org/web/20250305141006/https://techcrunch.com/2025/03/05/gibberlink-lets-ai-agents-call-each-other-in-robo-language/ |archive-date=2025-03-05 |access-date=2025-04-02 |website=TechCrunch |language=en-US}} the Internet of Agents,{{Cite web |last=Cooney |first=Michael |date=2025-01-30 |title=Cisco touts 'Internet of Agents' for secure AI agent collaboration |url=https://www.networkworld.com/article/3812618/cisco-touts-internet-of-agents-for-secure-ai-agent-collaboration.html |archive-url=https://web.archive.org/web/20250131133538/https://www.networkworld.com/article/3812618/cisco-touts-internet-of-agents-for-secure-ai-agent-collaboration.html |archive-date=2025-01-31 |access-date=2025-04-02 |website=Network World |language=en}} and Agent2Agent (by Google).{{Cite web |last=Clark |first=Lindsay |date=2025-04-10 |title=Did someone say AI agents, Google asks, bursting in |url=https://www.theregister.com/2025/04/10/google_agentic_ai_cloud_next/ |archive-url=https://web.archive.org/web/20250410112802/https://www.theregister.com/2025/04/10/google_agentic_ai_cloud_next/ |archive-date=2025-04-10 |access-date=2025-05-14 |website=The Register}} Software frameworks for addressing agent reliability include AgentSpec, ToolEmu, GuardAgent, Agentic Evaluations, and predictive models from H2O.ai.{{Cite web |last=David |first=Emilia |date=2025-03-28 |title=New approach to agent reliability, AgentSpec, forces agents to follow rules |url=https://venturebeat.com/ai/new-approach-to-agent-reliability-agentspec-forces-agents-to-follow-rules/ |archive-url=https://web.archive.org/web/20250412120324/https://venturebeat.com/ai/new-approach-to-agent-reliability-agentspec-forces-agents-to-follow-rules/ |archive-date=2025-04-12 |access-date=2025-05-14 |website=VentureBeat |language=en-US}}

In February 2025, Hugging Face released Open Deep Research, an open source version of OpenAI Deep Research.{{Cite web |last=Edwards |first=Benj |date=2025-02-05 |title=Hugging Face clones OpenAI's Deep Research in 24 hours |url=https://arstechnica.com/ai/2025/02/after-24-hour-hackathon-hugging-faces-ai-research-agent-nearly-matches-openais-solution/ |archive-url=https://web.archive.org/web/20250206125754/https://arstechnica.com/ai/2025/02/after-24-hour-hackathon-hugging-faces-ai-research-agent-nearly-matches-openais-solution/ |archive-date=2025-02-06 |access-date=2025-04-02 |website=Ars Technica |language=en-US}} Hugging Face also released a free web browser agent, similar to OpenAI Operator.{{Cite web |last=Wiggers |first=Kyle |date=2025-05-06 |title=Hugging Face releases a free Operator-like agentic AI tool |url=https://techcrunch.com/2025/05/06/hugging-face-releases-a-free-operator-like-agentic-ai-tool/ |archive-url=https://web.archive.org/web/20250506221518/https://techcrunch.com/2025/05/06/hugging-face-releases-a-free-operator-like-agentic-ai-tool/ |archive-date=2025-05-06 |access-date=2025-05-14 |website=TechCrunch |language=en-US}}

Galileo published a leaderboard for agents on Hugging Face, which ranks their performance based on their underlying LLMs.{{Cite web |last=Ortiz |first=Sabrina |date=2025-02-14 |title=Which AI agent is the best? This new leaderboard can tell you |url=https://www.zdnet.com/article/which-ai-agent-is-the-best-this-new-leaderboard-can-tell-you/ |archive-url=https://web.archive.org/web/20250330001709/https://www.zdnet.com/article/which-ai-agent-is-the-best-this-new-leaderboard-can-tell-you/ |archive-date=2025-03-30 |access-date=2025-04-02 |website=ZDNET |language=en}}

A non-peer-reviewed research survey of 67 agents released by the end of 2024 found that the majority of agents were built by developers based in the United States, were built by companies, were intended for coding or computer interaction, had code or documentation available, and lacked safety policies or evaluations.{{cite arXiv |eprint=2502.01635 |last1=Casper |first1=Stephen |last2=Bailey |first2=Luke |last3=Hunter |first3=Rosco |last4=Ezell |first4=Carson |last5=Cabalé |first5=Emma |last6=Gerovitch |first6=Michael |last7=Slocum |first7=Stewart |last8=Wei |first8=Kevin |last9=Jurkovic |first9=Nikola |last10=Khan |first10=Ariba |last11=Christoffersen |first11=Phillip J. K. |last12=Pinar Ozisik |first12=A. |last13=Trivedi |first13=Rakshit |last14=Hadfield-Menell |first14=Dylan |last15=Kolt |first15=Noam |title=The AI Agent Index |date=2025 |class=cs.SE }}

A non-peer-reviewed paper by researchers at CSIRO lists software frameworks for monitoring agents as they are being used in production, and proposes a taxonomy of concepts relevant to AgentOps.{{cite arXiv | eprint=2411.05285 | last1=Dong | first1=Liming | last2=Lu | first2=Qinghua | last3=Zhu | first3=Liming | title=AgentOps: Enabling Observability of LLM Agents | date=2024 | class=cs.AI }}

==== Autonomous capabilities ====

The Financial Times compared the autonomy of AI agents to the SAE classification of self-driving cars, comparing most applications to Level 2 or Level 3, with some achieving Level 4 in highly specialized circumstances, and Level 5 being theoretical.{{Cite news |last=Colback |first=Lucy |date=2025-05-07 |title=AI agents: from co-pilot to autopilot |url=https://www.ft.com/content/3e862e23-6e2c-4670-a68c-e204379fe01f |archive-url=https://archive.today/20250507031905/https://www.ft.com/content/3e862e23-6e2c-4670-a68c-e204379fe01f |archive-date=2025-05-07 |access-date=2025-05-14 |work=Financial Times}}

==== Applications ====

As of April 2025, per the Associated Press, there are few real-world applications of AI agents.{{Cite web |date=2025-04-30 |title=Visa wants to give artificial intelligence 'agents' your credit card |url=https://apnews.com/article/ai-artificial-intelligence-5dfa1da145689e7951a181e2253ab349 |archive-url=https://web.archive.org/web/20250501010808/https://apnews.com/article/ai-artificial-intelligence-5dfa1da145689e7951a181e2253ab349 |archive-date=2025-05-01 |access-date=2025-05-14 |website=Associated Press |language=en}}

A recruiter for the Department of Government Efficiency proposed in April 2025 to use AI agents to automate the work of about 70,000 United States federal government employees, as part of a startup with funding from OpenAI and a partnership agreement with Palantir. This proposal was criticized by experts for its impracticality, if not impossibility, and the lack of corresponding widespread adoption by businesses.{{Cite magazine |last=Haskins |first=Caroline |date=2025-05-02 |title=A DOGE Recruiter Is Staffing a Project to Deploy AI Agents Across the US Government |url=https://www.wired.com/story/doge-recruiter-ai-agents-palantir-clown-emoji/ |archive-url=https://web.archive.org/web/20250503074840/https://www.wired.com/story/doge-recruiter-ai-agents-palantir-clown-emoji/ |archive-date=2025-05-03 |access-date=2025-05-14 |magazine=Wired |language=en-US |issn=1059-1028}}

==== Proposed benefits ====

Proponents argue that AI agents can increase personal and economic productivity,{{Cite web |last=Piper |first=Kelsey |date=2024-03-29 |title=AI "agents" could do real work in the real world. That might not be a good thing. |url=https://www.vox.com/future-perfect/24114582/artificial-intelligence-agents-openai-chatgpt-microsoft-google-ai-safety-risk-anthropic-claude |archive-url=https://web.archive.org/web/20241219213538/https://www.vox.com/future-perfect/24114582/artificial-intelligence-agents-openai-chatgpt-microsoft-google-ai-safety-risk-anthropic-claude |archive-date=2024-12-19 |access-date=2025-01-14 |website=Vox |language=en-US}} foster greater innovation,{{Cite news |last=Purdy |first=Mark |date=2024-12-12 |title=What Is Agentic AI, and How Will It Change Work? |url=https://hbr.org/2024/12/what-is-agentic-ai-and-how-will-it-change-work |archive-url=https://archive.today/20241230071722/https://hbr.org/2024/12/what-is-agentic-ai-and-how-will-it-change-work |archive-date=2024-12-30 |access-date=2025-01-20 |work=Harvard Business Review |issn=0017-8012}} and liberate users from monotonous tasks.{{Cite web |last=Wright |first=Webb |date=2024-12-12 |title=AI Agents with More Autonomy Than Chatbots Are Coming. Some Safety Experts Are Worried |url=https://www.scientificamerican.com/article/what-are-ai-agents-and-why-are-they-about-to-be-everywhere/ |archive-url=https://web.archive.org/web/20241223010402/https://www.scientificamerican.com/article/what-are-ai-agents-and-why-are-they-about-to-be-everywhere/ |archive-date=2024-12-23 |access-date=2025-01-14 |website=Scientific American |language=en}} A Bloomberg opinion piece by Parmy Olson argued that agents are best suited for narrow, repetitive tasks with low risk.{{Cite web |last=Olson |first=Parmy |author-link=Parmy Olson |date=2025-01-27 |title=Skip the Hype, Here's How AI 'Agents' Can Really Help |url=https://www.bloomberg.com/opinion/articles/2025-01-27/skip-the-hype-here-s-how-ai-agents-can-really-help |archive-url=https://archive.today/20250127052332/https://www.bloomberg.com/opinion/articles/2025-01-27/skip-the-hype-here-s-how-ai-agents-can-really-help |archive-date=2025-01-27 |access-date=2025-04-02 |website=Bloomberg News}} Conversely, researchers suggest that agents could be applied to web accessibility for people who have disabilities,{{cite arXiv |eprint=2306.06070 |last1=Deng |first1=Xiang |last2=Gu |first2=Yu |last3=Zheng |first3=Boyuan |last4=Chen |first4=Shijie |last5=Stevens |first5=Samuel |last6=Wang |first6=Boshi |last7=Sun |first7=Huan |last8=Su |first8=Yu |title=Mind2Web: Towards a Generalist Agent for the Web |date=2023 |class=cs.CL }}{{Cite web |last=Woodall |first=Tatyana |date=2024-01-09 |title=Researchers developing AI to make the internet more accessible |url=https://news.osu.edu/researchers-developing-ai-to-make-the-internet-more-accessible/ |archive-url=https://web.archive.org/web/20250328092959/https://news.osu.edu/researchers-developing-ai-to-make-the-internet-more-accessible/ |archive-date=2025-03-28 |access-date=2025-04-02 |website=Ohio State News |language=en-us}} and researchers at Hugging Face propose that agents could be used for coordinating resources such as during disaster response.{{Cite web |last1=Mitchell |first1=Margaret |author-link1=Margaret Mitchell (scientist) |last2=Ghosh |first2=Avijit |last3=Luccioni |first3=Sasha |author-link3=Sasha Luccioni |last4=Pistilli |first4=Giada |date=2025-03-24 |title=Why handing over total control to AI agents would be a huge mistake 
|url=https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/ |archive-url=https://web.archive.org/web/20250324115123/https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/ |archive-date=2025-03-24 |access-date=2025-04-02 |website=MIT Technology Review |language=en}}

==== Concerns ====

Potential concerns include issues of liability, an increased risk of cybercrime, ethical challenges, as well as problems related to AI safety and AI alignment. Other issues involve data privacy.{{Cite web |last=O'Neill |first=Brian |date=2024-12-18 |title=What is an AI agent? A computer scientist explains the next wave of artificial intelligence tools |url=https://theconversation.com/what-is-an-ai-agent-a-computer-scientist-explains-the-next-wave-of-artificial-intelligence-tools-242586 |archive-url=https://web.archive.org/web/20250104000722/https://theconversation.com/what-is-an-ai-agent-a-computer-scientist-explains-the-next-wave-of-artificial-intelligence-tools-242586 |archive-date=2025-01-04 |access-date=2025-01-14 |website=The Conversation |language=en-US}}{{Cite web |last=Zittrain |first=Jonathan L. |date=2024-07-02 |title=We Need to Control AI Agents Now |url=https://www.theatlantic.com/technology/archive/2024/07/ai-agents-safety-risks/678864/ |archive-url=https://web.archive.org/web/20241231080834/https://www.theatlantic.com/technology/archive/2024/07/ai-agents-safety-risks/678864/ |archive-date=2024-12-31 |access-date=2025-01-20 |website=The Atlantic |language=en}}{{Cite web |last=Kerner |first=Sean Michael |date=2025-01-16 |title=Nvidia tackles agentic AI safety and security with new NeMo Guardrails NIMs |url=https://venturebeat.com/ai/nvidia-boosts-agentic-ai-safety-with-nemo-guardrails-promising-better-protection-with-low-latency/ |archive-url=https://web.archive.org/web/20250116161332/https://venturebeat.com/ai/nvidia-boosts-agentic-ai-safety-with-nemo-guardrails-promising-better-protection-with-low-latency/ |archive-date=2025-01-16 |access-date=2025-01-20 |website=VentureBeat |language=en-US}}{{Cite web |date=2025-01-27 |title=The argument against AI agents and unnecessary automation |url=https://www.theregister.com/2025/01/27/ai_agents_automate_argument/ |archive-url=https://web.archive.org/web/20250127193228/https://www.theregister.com/2025/01/27/ai_agents_automate_argument/ |archive-date=2025-01-27 |access-date=2025-01-30 |website=The Register}}{{Cite web |last=Balevic |first=Katie |title=Signal president warns the hyped agentic AI bots threaten user privacy |url=https://www.businessinsider.com/signal-president-warns-privacy-threat-agentic-ai-meredith-whittaker-2025-3 |archive-url=https://web.archive.org/web/20250312185602/https://www.businessinsider.com/signal-president-warns-privacy-threat-agentic-ai-meredith-whittaker-2025-3 |archive-date=2025-03-12 |access-date=2025-04-02 |website=Business Insider |language=en-US}} Additional challenges include weakened human oversight, algorithmic bias,{{Cite news |last=Lin |first=Belle |date=2025-01-06 |title=How Are Companies Using AI Agents? 
Here's a Look at Five Early Users of the Bots |url=https://www.wsj.com/articles/how-are-companies-using-ai-agents-heres-a-look-at-five-early-users-of-the-bots-26f87845 |archive-url=https://archive.today/20250106123337/https://www.wsj.com/articles/how-are-companies-using-ai-agents-heres-a-look-at-five-early-users-of-the-bots-26f87845 |archive-date=2025-01-06 |access-date=2025-01-20 |work=The Wall Street Journal |language=en-US |issn=0099-9660}} and compounding software errors, as well as issues related to the explainability of agent decisions, security vulnerabilities, problems with underemployment, job displacement, and the potential for user manipulation,{{Cite magazine |last=Crawford |first=Kate |date=2024-12-23 |title=AI Agents Will Be Manipulation Engines |url=https://www.wired.com/story/ai-agents-personal-assistants-manipulation-engines/ |archive-url=https://web.archive.org/web/20250103053608/https://www.wired.com/story/ai-agents-personal-assistants-manipulation-engines/ |archive-date=2025-01-03 |access-date=2025-01-14 |magazine=Wired |language=en-US |issn=1059-1028}} misinformation or malinformation. They may also complicate legal frameworks, foster hallucinations, hinder countermeasures against rogue agents, and suffer from the lack of standardized evaluation methods. They have also been criticized for being expensive and having a negative impact on internet traffic and potentially the environment.

Journalists have described AI agents as part of a push by Big Tech companies to "automate everything".{{Cite web |last=Wong |first=Matteo |date=2025-03-14 |title=Was Sam Altman Right About the Job Market? |url=https://www.theatlantic.com/technology/archive/2025/03/generative-ai-agents/682050/ |archive-url=https://web.archive.org/web/20250317115042/https://www.theatlantic.com/technology/archive/2025/03/generative-ai-agents/682050/ |archive-date=2025-03-17 |access-date=2025-04-02 |website=The Atlantic |language=en |quote=In other words, flawed products won't stop tech companies' push to automate everything—the AI-saturated future will be imperfect at best, but it is coming anyway.}} Several CEOs of those companies have stated in early 2025 that they expect AI agents to eventually "join the workforce".{{Cite web |last=Agarwal |first=Shubham |title=Carnegie Mellon staffed a fake company with AI agents. It was a total disaster. |url=https://www.businessinsider.com/ai-agents-study-company-run-by-ai-disaster-replace-jobs-2025-4 |archive-url=https://web.archive.org/web/20250428031158mp_/https://www.businessinsider.com/ai-agents-study-company-run-by-ai-disaster-replace-jobs-2025-4 |archive-date=2025-04-28 |access-date=2025-05-15 |website=Business Insider |language=en-US}}{{Cite web |last=Sabin |first=Sam |date=2025-04-22 |title=Exclusive: Anthropic warns fully AI employees are a year away |url=https://www.axios.com/2025/04/22/ai-anthropic-virtual-employees-security |archive-url=https://web.archive.org/web/20250423000910/https://www.axios.com/2025/04/22/ai-anthropic-virtual-employees-security |archive-date=2025-04-23 |access-date=2025-05-15 |website=Axios |language=en}} However, in a non-peer-reviewed study, Carnegie Mellon University researchers tested the behavior of agents in a simulated software company and found that none of the agents could complete a majority of the assigned tasks.{{cite arXiv | eprint=2412.14161 | last1=Xu | first1=Frank F. | last2=Song | first2=Yufan | last3=Li | first3=Boxuan | last4=Tang | first4=Yuxuan | last5=Jain | first5=Kritanjali | last6=Bao | first6=Mengxue | last7=Wang | first7=Zora Z. | last8=Zhou | first8=Xuhui | last9=Guo | first9=Zhitong | last10=Cao | first10=Murong | last11=Yang | first11=Mingyang | author12=Hao Yang Lu | last13=Martin | first13=Amaad | last14=Su | first14=Zhe | last15=Maben | first15=Leander | last16=Mehta | first16=Raj | last17=Chi | first17=Wayne | last18=Jang | first18=Lawrence | last19=Xie | first19=Yiqing | last20=Zhou | first20=Shuyan | last21=Neubig | first21=Graham | title=TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks | date=2024 | class=cs.CL }}

Yoshua Bengio warned at the 2025 World Economic Forum that "all of the catastrophic scenarios with AGI or superintelligence happen if we have agents".

In March 2025, Scale AI signed a contract with the United States Department of Defense to develop and deploy, in collaboration with Anduril Industries and Microsoft, AI agents that assist the military with "operational decision-making."{{Cite web |last=Hornstein |first=Julia |title=AI agents are coming to the military. VCs love it, but researchers are a bit wary. |url=https://www.businessinsider.com/ai-agents-coming-military-new-scaleai-contract-2025-3 |archive-url=https://web.archive.org/web/20250312101554/https://www.businessinsider.com/ai-agents-coming-military-new-scaleai-contract-2025-3 |archive-date=2025-03-12 |access-date=2025-04-02 |website=Business Insider |language=en-US}} Researchers have expressed concerns that agents and the large language models they are based on could be biased towards aggressive foreign policy decisions.{{Cite web |last=Tangermann |first=Victor |date=2025-03-06 |title=Pentagon Signs Deal to "Deploy AI Agents for Military Use" |url=https://futurism.com/pentagon-signs-deal-deploy-ai-agents-military-use |archive-url=https://web.archive.org/web/20250308022255/https://futurism.com/pentagon-signs-deal-deploy-ai-agents-military-use |archive-date=2025-03-08 |access-date=2025-04-02 |website=Futurism}}{{Cite web |last=Jensen |first=Benjamin |date=2025-03-04 |title=The Troubling Truth About How AI Agents Act in a Crisis |url=https://foreignpolicy.com/2025/03/04/ai-bias-national-security-study/ |archive-url=https://archive.today/20250304114949/https://foreignpolicy.com/2025/03/04/ai-bias-national-security-study/ |archive-date=2025-03-04 |access-date=2025-04-02 |website=Foreign Policy |language=en-US}}

Research-focused agents risk consensus bias and coverage bias because they collect information from the public Internet.{{Cite web |last=Nuñez |first=Michael |date=2025-02-25 |title=OpenAI expands Deep Research access to Plus users, heating up AI agent wars with DeepSeek and Claude |url=https://venturebeat.com/ai/openai-expands-deep-research-access-to-plus-users-heating-up-ai-agent-wars-with-deepseek-and-claude/ |archive-url=https://web.archive.org/web/20250311120439/https://venturebeat.com/ai/openai-expands-deep-research-access-to-plus-users-heating-up-ai-agent-wars-with-deepseek-and-claude/ |archive-date=2025-03-11 |access-date=2025-04-02 |website=VentureBeat |language=en-US}} ''New York'' magazine unfavorably compared the user workflow of agent-based web browsers to Amazon Alexa, which was "software talking to software, not humans talking to software pretending to be humans to use software."{{Cite web |last=Herrman |first=John |date=2025-01-25 |title=What Are AI 'Agents' For? |url=https://nymag.com/intelligencer/article/what-are-ai-agents-like-openai-operator-for.html |archive-url=https://web.archive.org/web/20250125112442/https://nymag.com/intelligencer/article/what-are-ai-agents-like-openai-operator-for.html |archive-date=2025-01-25 |access-date=2025-04-02 |website=Intelligencer |language=en}}

Agents have been linked to the Dead Internet Theory due to their ability to both publish and engage with online content.{{Cite web |last=Caramela |first=Sammi |date=2025-02-01 |title='Dead Internet Theory' Is Back Thanks to All of That AI Slop |url=https://www.vice.com/en/article/dead-internet-theory-is-back-thanks-to-all-of-that-ai-slop/ |archive-url=https://web.archive.org/web/20250201192805/https://www.vice.com/en/article/dead-internet-theory-is-back-thanks-to-all-of-that-ai-slop/ |archive-date=2025-02-01 |access-date=2025-04-02 |website=VICE |language=en-US}}

Agents may get stuck in infinite loops.{{Cite news |last1=Metz |first1=Cade |last2=Weise |first2=Karen |date=2023-10-16 |title=How 'A.I. Agents' That Roam the Internet Could One Day Replace Workers |url=https://www.nytimes.com/2023/10/16/technology/ai-agents-workers-replace.html |archive-url=https://archive.today/20231219182907/https://www.nytimes.com/2023/10/16/technology/ai-agents-workers-replace.html |archive-date=2023-12-19 |access-date=2025-04-02 |work=The New York Times |language=en-US |issn=0362-4331}}

===== Possible mitigation =====

Zico Kolter noted the possibility of emergent behavior as a result of interactions between agents, and proposed research in game theory to model the risks of these interactions.{{Cite magazine |last=Knight |first=Will |date=2025-04-09 |title=The AI Agent Era Requires a New Kind of Game Theory |url=https://www.wired.com/story/zico-kolter-ai-agents-game-theory/ |archive-url=https://web.archive.org/web/20250409202024/https://www.wired.com/story/zico-kolter-ai-agents-game-theory/ |archive-date=2025-04-09 |access-date=2025-05-15 |magazine=Wired |language=en-US |issn=1059-1028}}

Guardrails, defined by Business Insider as "filters, rules, and tools that can be used to identify and remove inaccurate content" have been suggested to help reduce errors.{{Cite web |last=Varanasi |first=Lakshmi |title=Don't get too excited about AI agents yet. They make a lot of mistakes. |url=https://www.businessinsider.com/ai-agents-errors-hallucinations-compound-risk-2025-4 |archive-url=https://web.archive.org/web/20250418101155/https://www.businessinsider.com/ai-agents-errors-hallucinations-compound-risk-2025-4 |archive-date=2025-04-18 |access-date=2025-05-15 |website=Business Insider |language=en-US}}

== Applications ==

{{Undue weight section|date=September 2023}}

The concept of agent-based modeling for self-driving cars was discussed as early as 2003.{{Cite conference |last1=Yang |first1=Guoqing |last2=Wu |first2=Zhaohui |last3=Li |first3=Xiumei |last4=Chen |first4=Wei |date=2003 |title=SVE: embedded agent-based smart vehicle environment |url=https://ieeexplore.ieee.org/document/1252782 |book-title=Proceedings of the 2003 IEEE International Conference on Intelligent Transportation Systems |volume=2 |pages=1745–1749 |doi=10.1109/ITSC.2003.1252782 |isbn=0-7803-8125-4 |s2cid=110177067|url-access=subscription }}

Hallerbach et al. explored the use of agent-based approaches for developing and validating automated driving systems. Their method involved a digital twin of the vehicle under test and microscopic traffic simulations using independent agents.{{cite journal |last1=Hallerbach |first1=S. |last2=Xia |first2=Y. |last3=Eberle |first3=U. |last4=Koester |first4=F. |title=Simulation-Based Identification of Critical Scenarios for Cooperative and Automated Vehicles |journal=SAE International Journal of Connected and Automated Vehicles |volume=1 |issue=2 |page=93 |date=2018 |publisher=SAE International |doi=10.4271/2018-01-1066 |url=https://www.researchgate.net/publication/324194968}}

Waymo developed a multi-agent simulation environment called Carcraft to test algorithms for self-driving cars.{{cite news |last1=Madrigal |first1=Alexis C. |title=Inside Waymo's Secret World for Training Self-Driving Cars |url=https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/ |access-date=14 August 2020 |work=The Atlantic}}{{cite journal |last1=Connors |first1=J. |last2=Graham |first2=S. |last3=Mailloux |first3=L. |title=Cyber Synthetic Modeling for Vehicle-to-Vehicle Applications |journal=International Conference on Cyber Warfare and Security |date=2018 |page=594-XI |publisher=Academic Conferences International Limited}} This system simulates interactions between human drivers, pedestrians, and automated vehicles. Artificial agents replicate human behavior using real-world data.

Salesforce's Agentforce is an agentic AI platform that allows for the building of autonomous agents to perform tasks.{{Cite web |last=Nuñez |first=Michael |date=2025-03-05 |title=Salesforce launches Agentforce 2dx, letting AI run autonomously across enterprise systems |url=https://venturebeat.com/ai/salesforce-launches-agentforce-2dx-pushing-autonomous-ai-deep-into-enterprise-workflows/ |access-date=2025-04-24 |website=VentureBeat |language=en-US}}{{Cite web |title=Salesforce unveils Agentforce to help create autonomous AI bots |url=https://www.cio.com/article/3518646/salesforce-unveils-agentforce-to-help-create-autonomous-ai-bots.html |access-date=2025-04-24 |website=CIO |language=en}}

The Transportation Security Administration is integrating agentic AI into new technologies, including machines that authenticate passenger identities using biometrics and photos, and systems for incident response.{{Cite web |date=2025-01-23 |title=TSA Showcase Biometric AI-powered Airport Immigration Security |url=https://techinformed.com/tsa-ces-2025-biometric-ai-security-innovations-immigration/ |access-date=2025-04-24 |website=techinformed.com |language=en-US}}

== See also ==

{{div col|colwidth=30em}}

{{div col end}}

== Notes ==

{{Notelist}}

== Inline references ==

{{Reflist}}

== Other references ==

  • {{cite book |last=Domingos |first=Pedro |author-link=Pedro Domingos |title=The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World |date=September 22, 2015 |publisher=Basic Books |isbn=978-0465065707}}

  • {{Russell Norvig 2003 |at=Chapter 2 |mode=cs1}}
  • {{cite journal |last=Kasabov |first=N. |year=1998 |pages=453–454 |title=Introduction: Hybrid intelligent adaptive systems |journal=International Journal of Intelligent Systems |volume=13 |issue=6 |doi=10.1002/(SICI)1098-111X(199806)13:6<453::AID-INT1>3.0.CO;2-K |s2cid=120318478 |doi-access=free }}
  • {{cite book |last=Weiss |first=G. |year=2013 |title=Multiagent systems |edition=2nd |location=Cambridge, MA |publisher=MIT Press |isbn=978-0-262-01889-0 }}

{{Artificial intelligence navbox}}

{{Authority control}}

{{DEFAULTSORT:Intelligent Agent}}

Category:Artificial intelligence