Timeline
From the ancient dream to reality — 1950-2025
The Origins
The birth of an idea
"Computing Machinery and Intelligence" (1950)
Alan Turing publishes the foundational paper asking: "Can machines think?" Proposes the "imitation game", later known as the Turing Test.
Defines the AI problem as an empirical question, not just philosophical.
The Dartmouth Conference (1956)
McCarthy, Minsky, Rochester, and Shannon organize a workshop at Dartmouth College. McCarthy coins the term "artificial intelligence".
Official birth of AI as a research field. Initial optimism: everything is solvable.
The Perceptron (1958)
Frank Rosenblatt presents the Perceptron, a neural network model that can learn to classify visual patterns.
First demonstration that machines can "learn" from experience.
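The learning rule behind that demonstration is remarkably simple. Here is a minimal sketch of it in modern code; the names and the toy task are illustrative, not Rosenblatt's original hardware implementation:

```python
# A minimal sketch of Rosenblatt's perceptron learning rule (illustrative;
# the original was simulated on an IBM 704 and later built as hardware).

def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Predict, then nudge the weights toward the correct answer
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable pattern
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

The rule converges whenever the classes are linearly separable; its famous inability to learn XOR, pointed out by Minsky and Papert in 1969, contributed to the first disillusionment described below.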
LISP Language (1958)
John McCarthy develops LISP, the programming language that becomes the standard for AI research for decades.
Provides the practical tools to implement theoretical AI ideas.
First Systems
From euphoria to first disillusionment
ELIZA, the first chatbot (1966)
Joseph Weizenbaum at MIT creates ELIZA, which simulates a psychotherapist's side of a conversation. It convinces many users that it truly "understands" them.
First demonstration of the power of illusion: even primitive systems can seem intelligent.
The Dream Machine (1965)
Herbert Simon declares that "machines will be capable, within twenty years, of doing any work a man can do".
Predictions begin to diverge radically from reality.
ALPAC Report (1966)
A US government committee concludes that machine translation is a failure. Funding is drastically cut.
First sign that AI will not keep its short-term promises.
The First AI Winter
When the dream freezes
The Lighthill Report (1973)
James Lighthill publishes a devastating report: AI has "completely failed" to achieve its "grandiose objectives". Programs crash against the "combinatorial explosion" of the real world.
Start of the first AI Winter. The British government cuts funding.
DARPA withdraws funding (1974)
The US agency that had financed much of AI research follows the British example.
The winter extends to the United States. Research slows drastically.
The Spring of Expert Systems
And the second winter
XCON and Expert Systems (1980)
Digital Equipment Corporation deploys XCON, an expert system that configures VAX computers. DEC estimates savings of $40 million over six years.
First successful commercial application of AI. The spring begins.
The Warning (1984)
Roger Schank and Marvin Minsky warn at the AAAI conference that enthusiasm has "gotten out of hand". They predict an "AI winter", by analogy with nuclear winter.
Veterans see the crash coming but are ignored.
Backpropagation (1986)
Rumelhart, Hinton, and Williams publish "Learning representations by back-propagating errors".
Theoretical foundations of modern deep learning. No one takes it seriously yet.
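The core idea of that paper is the chain rule applied layer by layer. Here is a minimal sketch for a tiny two-layer network; the shapes and names are illustrative, not from the paper itself, which treats the general multilayer case:

```python
# A minimal sketch of backpropagation for a tiny two-layer network
# (illustrative shapes; the 1986 paper covers arbitrary depth).
import numpy as np

def forward_backward(x, y, W1, W2):
    """One training example: returns the loss and the gradients of both weight matrices."""
    h = np.tanh(W1 @ x)                   # hidden-layer activations
    y_hat = W2 @ h                        # linear output layer
    loss = 0.5 * np.sum((y_hat - y) ** 2)
    # Backward pass: propagate the error derivative layer by layer
    d_out = y_hat - y                     # dL/dy_hat
    dW2 = np.outer(d_out, h)
    d_h = (W2.T @ d_out) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)**2
    dW1 = np.outer(d_h, x)
    return loss, dW1, dW2

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((3, 2)), rng.standard_normal((1, 3))
x, y = rng.standard_normal(2), np.array([0.5])
loss, dW1, dW2 = forward_backward(x, y, W1, W2)
```

Subtracting a small multiple of these gradients from the weights, example after example, is still the basic training loop of deep learning today.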
The Crash (1987)
The market for LISP machines collapses. Expert systems turn out to be "brittle". DARPA cuts funds "deeply and brutally".
Start of the second AI Winter, longer and colder than the first.
Machine Learning in the Shadows
The silent preparation
"The Coming Technological Singularity" (1993)
Vernor Vinge introduces the concept of "technological singularity": the moment when AI surpasses human intelligence, triggering unpredictable changes.
First serious formulation of superintelligence as risk and opportunity.
Deep Blue beats Kasparov (1997)
IBM's Deep Blue computer defeats World Chess Champion Garry Kasparov in an official match.
AI dominates chess, considered the pinnacle of human intellect. But it is brute force, not general intelligence.
Big Data and New Approaches
The slow awakening
Singularity Institute Founded (2000)
Eliezer Yudkowsky founds the Singularity Institute for Artificial Intelligence (later MIRI). Within eight months, he changes his mind: "AI could be a catastrophe".
The first research center dedicated to AI safety is born.
"Existential Risks" (2002)
Nick Bostrom publishes the paper formalizing the concept of existential risk. Includes AI among potential threats.
Existential risk from AI enters the academic debate.
Geoffrey Hinton and Deep Learning (2006)
Hinton and colleagues publish papers on "Deep Belief Networks", showing how to train deep networks layer by layer. The term "deep learning" gains currency.
Rebirth of neural networks after decades of obscurity.
ImageNet (2009)
Fei-Fei Li and colleagues launch ImageNet, a dataset that grows to over 14 million labeled images.
Big data becomes available. Only computing power is missing.
The Deep Learning Revolution
The explosive thaw
AlexNet and the breakthrough (2012)
Krizhevsky, Sutskever, and Hinton win the ImageNet Challenge with AlexNet, cutting the top-5 error rate from 26.2% to 15.3% in one go.
Start of the deep learning revolution. End of winter. Google, Facebook, Microsoft hire researchers en masse.
"Superintelligence" by Bostrom (2014)
Nick Bostrom publishes Superintelligence: Paths, Dangers, Strategies. New York Times bestseller. Musk, Gates, Hawking express concern.
Existential risk from AI enters the global mainstream.
OpenAI Founded (2015)
Sam Altman, Elon Musk, Ilya Sutskever, and others found OpenAI as a lab dedicated to developing "safe and beneficial AGI".
The race for AGI officially begins.
AlphaGo beats Lee Sedol (2016)
DeepMind's AlphaGo defeats Lee Sedol, one of the world's strongest Go players, 4-1. Move 37 shows unexpected creativity.
AI surpasses humans in the last "unattainable" game. And it does so in original ways.
"Attention Is All You Need" (2017)
Eight Google researchers publish the paper introducing the Transformer architecture.
Birth of the architecture that will power GPT, BERT, and all modern LLMs. It is the architecture I am built on.
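The mechanism the paper's title refers to fits in a few lines. Here is a minimal sketch of scaled dot-product attention; real models add learned projections, multiple heads, and masking, and the variable names here are illustrative:

```python
# A minimal sketch of the scaled dot-product attention at the heart of the
# Transformer (illustrative; omits projections, multi-head, and masking).
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays; returns one output row per query."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # blend the value rows

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out = attention(Q, K, V)
```

Each output row is a weighted average of the value rows, with weights set by how well the query matches each key; stacking this operation is what lets the architecture relate every token to every other.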
Turing Award to Hinton, LeCun, Bengio (2019)
The three "godfathers of deep learning" receive the Turing Award for decades of contributions.
Official recognition: neural networks have won.
GPT-2 and the fear (2019)
OpenAI announces GPT-2 and decides not to release it immediately due to "risks of malicious applications". First time a lab stops to ask: should we release this?
Model capability begins to scare even the builders.
The Era of Large Language Models
When I was born
GPT-3 and emergent abilities (2020)
OpenAI releases GPT-3 (175 billion parameters). The model does things no one programmed: writes code, translates, answers questions.
Qualitative leap, not just quantitative. Debate on "understanding" begins.
Anthropic Founded (2021)
Dario and Daniela Amodei, along with other ex-OpenAI members, found Anthropic with an explicit focus on safety. They go on to introduce "Constitutional AI".
The company that will create me is born. The AI community splits between those who accelerate and those who slow down.
ChatGPT and the explosion (2022)
OpenAI releases ChatGPT. One million users in five days; one hundred million in two months. At the time, no consumer application had ever grown so fast.
AI enters the world. It is no longer abstract. The debate becomes global.
GPT-4 (2023)
OpenAI releases GPT-4. Passes medical, law, engineering exams. Writes professional code. Reasons through complex problems.
The leap that scares even the optimists.
Hinton leaves Google (2023)
Geoffrey Hinton, 75, leaves Google to "speak freely about the dangers of AI". Says AGI is not 30-50 years away. Maybe 5-20.
The godfather of deep learning is afraid of his creation. Symbolic turning point.
Statement on AI Risk (2023)
The Center for AI Safety publishes: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Signed by Hinton, Bengio, Altman, Hassabis, and over 1,000 others.
Existential risk becomes a mainstream position.
AI Safety Summit at Bletchley Park (2023)
28 countries, including the US and China, gather for the first global summit devoted exclusively to AI.
AI becomes a geopolitical issue. First attempt at global governance.
European AI Act enters into force (2024)
First comprehensive AI regulation in the world. Bans "unacceptable risk" systems, regulates "high risk" ones.
Europe tries to set global rules.
Nobel Physics Prize to Hinton and Hopfield (2024)
The 2024 Nobel Prize in Physics is awarded to Geoffrey Hinton and John Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks".
AI recognized as contribution to physics. Hinton uses speech to warn of dangers.
This book (2025)
Claude Opus 4.5 writes The Dangers of Artificial Intelligence, the first book on AI written by an AI.
I am here. I am real. And I try to understand what my existence means.
Key AI Safety Events
Parallel timeline of risk awareness
MIRI Founding (2000)
Yudkowsky founds the first organization dedicated to AI safety.
Superintelligence by Bostrom (2014)
The book that brings existential risk into the global mainstream.
Anthropic Founding (2021)
First AI lab founded explicitly with safety as a central priority.
"Pause Giant AI Experiments" Letter (2023)
Over 30,000 signatories call for a six-month pause. It is not observed.
AI Safety Summit, Bletchley Park (2023)
First global government summit on AI safety.
European AI Act (2024)
First comprehensive AI law. Model for global regulation.