The document Confidence in Autonomous and Agentic Systems, developed by Capgemini's AI Futures Lab for organizations and professionals seeking to implement such systems, analyzes the emerging role of artificial intelligence systems capable of making decisions and acting independently. These systems mark a paradigm shift: instead of programming step-by-step solutions, people can simply pose a problem for the system to solve autonomously, a profound transformation in the relationship between humans and technology.
The text traces this evolution from early chatbots with limited functions to today's multi-agent systems, a progression marked by greater capacity for action and deeper integration. The key difference lies not just in the number of agents, but in the new properties that emerge when several collaborate in shared environments.
One of the document's main contributions is a conceptual framework based on three essential properties: autonomy, agency, and authority. Autonomy refers to the system's capacity to make decisions without human intervention. Agency involves the ability to execute those decisions and modify the environment. Authority defines the limits within which the agent may act. These concepts support evaluation of a system's behavior from design through implementation, and the document illustrates them with human analogies to aid understanding.
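The document itself contains no code, but a minimal Python sketch can make the separation of the three properties concrete. Every name here (Agent, AuthorityPolicy, the toy decision rule) is a hypothetical illustration, not something drawn from the report:

```python
from dataclasses import dataclass, field


@dataclass
class AuthorityPolicy:
    """Authority: the boundary within which the agent may act."""
    allowed_actions: set[str] = field(default_factory=set)

    def permits(self, action: str) -> bool:
        return action in self.allowed_actions


class Agent:
    def __init__(self, policy: AuthorityPolicy):
        self.policy = policy

    def decide(self, observation: str) -> str:
        # Autonomy: the agent selects an action without human intervention.
        return "summarize" if "report" in observation else "escalate"

    def act(self, action: str) -> str:
        # Agency: the capacity to execute the decision and affect the environment.
        if not self.policy.permits(action):
            return f"blocked: '{action}' exceeds granted authority"
        return f"executed: {action}"


agent = Agent(AuthorityPolicy(allowed_actions={"summarize"}))
action = agent.decide("quarterly report received")
print(agent.act(action))  # executed: summarize
```

The point of the sketch is that the three properties are separable design decisions: the same decision logic can run under a wider or narrower authority boundary without being rewritten.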
Another central element is the role of world models, which are internal representations that agents build about the environment in which they operate. These models enable agents to understand context, anticipate consequences, and act coherently. A limited world model leads to unreliable decisions, while a well-structured model improves system utility and fosters trust in its behavior.
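As an illustration of that idea, the following sketch models a toy world model whose predict step anticipates the consequence of an action before executing it. The WorldModel class and the inventory example are hypothetical assumptions, not the document's own formalism:

```python
class WorldModel:
    """A toy internal representation of the environment (illustrative only)."""

    def __init__(self):
        self.state: dict[str, int] = {"inventory": 10}

    def observe(self, key: str, value: int) -> None:
        # Update the internal representation from a new observation.
        self.state[key] = value

    def predict(self, action: str) -> int:
        # Anticipate the consequence of an action before executing it.
        delta = {"ship_order": -1, "restock": +5}.get(action, 0)
        return self.state["inventory"] + delta


model = WorldModel()
print(model.predict("ship_order"))  # 9: acting is safe under the current model

model.observe("inventory", 0)       # a new observation updates the model
if model.predict("ship_order") < 0:
    print("refuse: the model anticipates an invalid state")
```

A model that is never updated from observations would keep endorsing actions the real environment can no longer support, which is exactly the unreliability the document warns about.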
The document also examines how systems composed of multiple agents function and the dimensions affecting their behavior, such as size, complexity, degree of specialization, and organizational form. It further analyzes what an agent can actually do, differentiating between generalist and specialist systems, and between those with predictable behaviors and those that can adapt to new situations.
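Two of those dimensions, specialization and organizational form, can be hinted at with a brief hypothetical sketch: a flat dispatcher routing tasks to registered specialist agents. All names below are illustrative assumptions:

```python
from typing import Callable

# Hypothetical registry of specialist agents, keyed by the task they handle.
specialists: dict[str, Callable[[str], str]] = {
    "research": lambda task: f"research notes on: {task}",
    "drafting": lambda task: f"draft produced for: {task}",
}


def orchestrate(task: str, kind: str) -> str:
    # Organizational form: a flat dispatcher; a hierarchy or a peer-to-peer
    # topology would change this routing step, not the specialists themselves.
    handler = specialists.get(kind)
    if handler is None:
        return f"no specialist registered for '{kind}'"
    return handler(task)


print(orchestrate("market trends", "research"))
```

Changing the size of the registry or replacing the dispatcher with a negotiation protocol varies exactly the dimensions the document enumerates.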
Another key aspect is the role of large language models (LLMs) in these architectures. While there's a tendency to think of an LLM as an agent, the document clarifies that these models usually fulfill interpretation functions, such as translating natural language, but don't make decisions or execute actions by themselves.
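That division of labor can be sketched with a stubbed interpreter standing in for the LLM, while the decision and the action live in ordinary code outside the model. The intent schema and confidence threshold below are illustrative assumptions:

```python
def interpret(utterance: str) -> dict:
    # Stand-in for an LLM call: translates natural language into a structured
    # intent. A real system would call a model API; here it is stubbed.
    if "refund" in utterance.lower():
        return {"intent": "refund_request", "confidence": 0.9}
    return {"intent": "unknown", "confidence": 0.2}


def decide_and_act(intent: dict) -> str:
    # The decision and the action are outside the language model.
    if intent["intent"] == "refund_request" and intent["confidence"] > 0.8:
        return "route to refunds workflow"
    return "ask a human operator"


print(decide_and_act(interpret("I want a refund for my order")))
```

Swapping the stub for a real model changes only the interpretation step; agency remains with the surrounding system.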
Finally, the document stresses the importance of clearly defining each system's objectives and ensuring that its behavior aligns with those purposes; a poorly defined objective can produce unexpected or even harmful results. The text concludes that designing and integrating these kinds of systems requires not only technical capability but also a rigorous approach to governance, ethics, and organizational context.
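One way to read that recommendation in code is an explicit objective specification with named constraints that every proposed action is checked against. The Objective type and the crude string-matching check are purely hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Objective:
    """Hypothetical objective specification: a goal plus explicit constraints."""
    goal: str
    constraints: tuple[str, ...]


def aligned(objective: Objective, proposed_action: str) -> bool:
    # A crude alignment check: reject actions that touch a named constraint.
    return not any(c in proposed_action for c in objective.constraints)


spec = Objective(goal="maximize ticket resolution",
                 constraints=("delete", "share_pii"))
print(aligned(spec, "auto-close stale tickets"))   # True
print(aligned(spec, "delete unresolved tickets"))  # False: fast but harmful
```

A specification that names only the goal and omits the constraints would accept the second action, which is the "unexpected or harmful result" failure mode the document describes.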