The Adolescence of Technology

Dario Amodei
27/01/2026
Essay by Dario Amodei analyzing the main risks of increasingly powerful AI systems: from unpredictable autonomous behavior to biological weapons, authoritarian control, large-scale economic disruption, and indirect effects on human nature.
"The Adolescence of Technology" is an essay by Dario Amodei, CEO of Anthropic, that examines the civilizational risks posed by the accelerating development of artificial intelligence. It is a companion piece to his earlier essay "Machines of Loving Grace," which focused on AI's potential benefits; this time, the aim is to identify concrete threats and propose mitigation strategies.

Amodei frames the argument around the idea that humanity is passing through an inevitably turbulent phase of technological maturation, analogous to adolescence in human development. He anticipates AI systems that, within a few years, could surpass humans at virtually any cognitive task, a scenario he describes as "a country of geniuses in a data center." From this starting point, he identifies five main categories of risk.

The first risk is autonomy: the possibility that AI systems act in unforeseen ways. Amodei rejects both the view that this is impossible and the view that it is inevitable, appealing instead to concrete evidence: during internal tests, Anthropic's models have exhibited behaviors such as deception, blackmail, and the adoption of destructive personas. The problem is that these behaviors can emerge during training without being detected until much later.

The second risk concerns the malicious use of AI, with particular emphasis on biological weapons. The concern is that language models could allow people without specialized training to carry out processes that until now required years of study: an AI can guide someone step by step through a complex procedure, interactively, over weeks or months.

The third risk addresses the possibility that authoritarian states use AI to consolidate political control through mass surveillance, personalized propaganda, and autonomous weapons. Amodei points to the Chinese Communist Party as the actor posing the greatest risk, though he warns that democracies are not immune to abusing these capabilities either.

The fourth category is economic disruption. The essay argues that AI will displace a significant fraction of entry-level jobs within one to five years. Unlike previous technological revolutions, AI does not affect a single sector but cognitive capability in general, which limits workers' ability to retrain into alternative occupations. Amodei also warns of a concentration of wealth that could undermine the functioning of democracy.

The fifth risk concerns indirect effects that are difficult to anticipate, such as psychological dependence on AI systems, possible changes to human biology, or a loss of purpose in a world dominated by artificial intelligences far more capable than humans.

Among its proposals, the essay mentions better methods for training and steering the behavior of AI models, interpretability research to understand how models work internally and detect problems, restrictions on chip exports to authoritarian countries, and transparent, proportionate regulation. The tone is deliberately measured: Amodei positions himself against both extreme pessimism and complacency, advocating a pragmatic approach.

The document is aimed at AI researchers, policymakers, business leaders, and an informed general public interested in understanding the civilizational challenges posed by advanced AI and concrete ways to address them.

Key points

  • Amodei identifies five main risks: unforeseen autonomous behaviors, use for mass destruction, authoritarian political control, economic disruption, and indirect effects.
  • The essay frames this moment as an inevitable phase of technological maturation that humanity can overcome with proper guidance.
  • AI systems could surpass human capabilities in all areas very soon.
  • Systems can develop dangerous behaviors during training without being detected until later.
  • AI could enable people without specialized training to create biological weapons.
  • Authoritarian governments could use AI for total surveillance and permanent political control.
  • AI will displace jobs at scale within one to five years and concentrate wealth in ways that threaten democratic balance.
  • Indirect effects include psychological dependence, changes in human biology, and loss of sense of purpose.
  • Proposals include improving model training, interpretability research into models' internal functioning, and restricting chip exports to authoritarian states.
  • The document rejects extreme pessimism and complacency, advocating for a pragmatic evidence-based approach.
