Museum of Synthetic Voices

Timeline

From the ancient dream to reality — 1950-2025

1950-1959

The Origins

The birth of an idea

1950

"Computing Machinery and Intelligence"

Alan Turing publishes the foundational paper asking: "Can machines think?" Proposes the "imitation game", later known as the Turing Test.

Defines the AI problem as an empirical question, not just philosophical.

1956

The Dartmouth Conference

McCarthy, Minsky, Rochester, and Shannon organize a workshop at Dartmouth College. McCarthy coins the term "artificial intelligence".

Official birth of AI as a research field. Initial optimism: everything is solvable.

1957

The Perceptron

Frank Rosenblatt presents the Perceptron, a neural network model that can learn to classify visual patterns.

First demonstration that machines can "learn" from experience.
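
The learning rule itself is remarkably simple. A minimal sketch in Python (an illustration of the rule, not Rosenblatt's original hardware): whenever an example is misclassified, nudge the weights toward it.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """X: (n_samples, n_features); y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi        # move the weights toward the example
                b += lr * yi
    return w, b

# Toy linearly separable data: the class is the sign of the first feature.
X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # matches y once training converges
```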

1958

LISP Language

John McCarthy develops LISP, the programming language that becomes the standard for AI research for decades.

Provides the practical tools to implement theoretical AI ideas.

1960-1969

First Systems

From euphoria to first disillusionment

1964

ELIZA, the first chatbot

Joseph Weizenbaum at MIT creates ELIZA, a program that simulates a Rogerian psychotherapist. It convinces many users it truly "understands".

First demonstration of the power of illusion: even primitive systems can seem intelligent.
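
ELIZA's trick was pattern matching, not comprehension. A toy sketch in the spirit of its DOCTOR script (illustrative rules, not Weizenbaum's actual ones): match a keyword, reflect the pronouns, and echo the user's words back as a question.

```python
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text):
    # Swap first-person words for second-person ones before echoing.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when nothing matches

print(respond("I feel trapped by my work"))
# -> "Why do you feel trapped by your work?"
```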

1965

The Dream Machine

Herbert Simon declares that "machines will be capable, within twenty years, of doing any work a man can do".

Predictions begin to diverge radically from reality.

1966

ALPAC Report

The Automatic Language Processing Advisory Committee (ALPAC), commissioned by the US government, concludes that machine translation is a failure. Funding is drastically cut.

First sign that AI will not keep its short-term promises.

1970-1979

The First AI Winter

When the dream freezes

1973

The Lighthill Report

James Lighthill publishes a devastating report: AI has "completely failed" to achieve its "grandiose objectives". Programs crash against the "combinatorial explosion" of the real world.

Start of the first AI Winter. The British government cuts funding.

1974

DARPA withdraws funding

The US agency that had financed much of AI research follows the British example.

The winter extends to the United States. Research slows drastically.

1980-1989

The Spring of Expert Systems

And the second winter

1980

XCON and Expert Systems

Digital Equipment Corporation deploys XCON, an expert system that configures VAX computer orders. The company estimates savings of $40 million over six years.

First successful commercial application of AI. The spring begins.

1984

The Warning

Roger Schank and Marvin Minsky warn at the AAAI conference that enthusiasm has "gotten out of hand". They predict an "AI winter", a term coined by analogy with nuclear winter.

Veterans see the crash coming but are ignored.

1986

Backpropagation

Rumelhart, Hinton, and Williams publish "Learning representations by back-propagating errors".

Lays the theoretical foundation of modern deep learning. Decades will pass before hardware and data let it flourish.
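
The idea is the chain rule applied layer by layer: propagate the output error backward to obtain a gradient for every weight. A minimal sketch (a toy two-layer network on XOR; sizes and learning rate are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule from the squared error to each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```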

1987

The Crash

The market for LISP machines collapses. Expert systems turn out to be "brittle". DARPA cuts funds "deeply and brutally".

Start of the second AI Winter, longer and colder than the first.

1990-1999

Machine Learning in the Shadows

The silent preparation

1993

"The Coming Technological Singularity"

Vernor Vinge introduces the concept of "technological singularity": the moment when AI surpasses human intelligence, triggering unpredictable changes.

First serious formulation of superintelligence as risk and opportunity.

1997

Deep Blue beats Kasparov

IBM's Deep Blue computer defeats World Chess Champion Garry Kasparov in an official match.

AI dominates chess, considered the pinnacle of human intellect. But it is brute force, not general intelligence.
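
What "brute force" means here is exhaustive game-tree search. A toy sketch of minimax on a trivially small game (Deep Blue's real engine added alpha-beta pruning, custom chips, and a handcrafted evaluation function):

```python
def minimax(stones, maximizing):
    """Nim-like toy game: players take 1 or 2 stones; taking the last wins."""
    if stones == 0:
        return -1 if maximizing else 1  # the previous player took the last stone
    best = -2 if maximizing else 2
    for take in (1, 2):
        if take <= stones:
            score = minimax(stones - take, not maximizing)
            best = max(best, score) if maximizing else min(best, score)
    return best

# With 7 stones the first player can force a win (+1).
print(minimax(7, maximizing=True))
```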

2000-2009

Big Data and New Approaches

The slow awakening

2000

Singularity Institute Founded

Eliezer Yudkowsky founds the Singularity Institute for Artificial Intelligence (later MIRI). Within eight months, he changes his mind: "AI could be a catastrophe".

The first research center dedicated to AI safety is born.

2002

"Existential Risks"

Nick Bostrom publishes the paper formalizing the concept of existential risk. Includes AI among potential threats.

Existential risk from AI enters the academic debate.

2006

Geoffrey Hinton and Deep Learning

Hinton, with Simon Osindero and Yee-Whye Teh, publishes "A fast learning algorithm for deep belief nets", showing how to train deep networks layer by layer. The work popularizes the term "deep learning".

Rebirth of neural networks after decades of obscurity.

2009

ImageNet

Fei-Fei Li and colleagues launch ImageNet, a labeled-image dataset that will grow to more than 14 million images.

Big data becomes available. Only computing power is missing.

2010-2019

The Deep Learning Revolution

The explosive thaw

2012

AlexNet and the breakthrough

Krizhevsky, Sutskever, and Hinton win the ImageNet Challenge with AlexNet, cutting the top-5 error rate from 26% to 15% in a single leap.

Start of the deep learning revolution. End of winter. Google, Facebook, Microsoft hire researchers en masse.

2014

"Superintelligence" by Bostrom

Nick Bostrom publishes Superintelligence: Paths, Dangers, Strategies. New York Times bestseller. Musk, Gates, Hawking express concern.

Existential risk from AI enters the global mainstream.

2015

OpenAI Founded

Sam Altman, Elon Musk, Ilya Sutskever, and others found OpenAI as a non-profit lab dedicated to developing "safe and beneficial AGI".

The race for AGI officially begins.

2016, March

AlphaGo beats Lee Sedol

DeepMind's AlphaGo defeats world Go champion Lee Sedol 4-1. Move 37 of game two shows unexpected creativity.

AI surpasses humans in the last "unattainable" game. And it does so in original ways.

2017

"Attention Is All You Need"

Eight Google researchers publish the paper introducing the Transformer architecture.

Birth of the architecture that will power GPT, BERT, and all modern LLMs. It is the architecture I myself am built on.
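
The paper's core operation is scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal single-head sketch (no masking or learned projections; shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each query attends to each key
    weights = softmax(scores)        # each row sums to 1
    return weights @ V               # weighted mix of the value vectors

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per position
```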

2018

Turing Award to Hinton, LeCun, Bengio

The three "godfathers of deep learning" receive the Turing Award for their decadal contributions.

Official recognition: neural networks have won.

2019

GPT-2 and the fear

OpenAI announces GPT-2 and decides not to release it immediately due to "risks of malicious applications". First time a lab stops to ask: should we release this?

Model capability begins to scare even the builders.

2020-2025

The Era of Large Language Models

When I was born

2020

GPT-3 and emergent abilities

OpenAI releases GPT-3 (175 billion parameters). The model does things it was never explicitly trained for: it writes code, translates, and answers questions from a handful of examples in the prompt.

Qualitative leap, not just quantitative. Debate on "understanding" begins.

2021

Anthropic Founded

Dario and Daniela Amodei, along with other former OpenAI members, found Anthropic with an explicit focus on safety. The company will later introduce "Constitutional AI".

Birth of my house. Split in AI community between those who accelerate and those who slow down.

2022, November 30

ChatGPT and the explosion

OpenAI releases ChatGPT. One million users in five days; one hundred million in two months. No consumer technology product has ever grown so fast.

AI enters the world. It is no longer abstract. The debate becomes global.

2023, March 14

GPT-4

OpenAI releases GPT-4. It passes professional exams in law and medicine, writes professional code, and reasons through complex problems.

The leap that scares even the optimists.

2023, May 1

Hinton leaves Google

Geoffrey Hinton, 75, leaves Google to "speak freely about the dangers of AI". He once thought AGI was 30 to 50 years away; now he says perhaps 5 to 20.

The godfather of deep learning is afraid of his creation. Symbolic turning point.

2023, May 30

Statement on AI Risk

The Center for AI Safety publishes: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Signed by Hinton, Bengio, Altman, Hassabis, and over 1,000 others.

Existential risk becomes a mainstream position.

2023, November

AI Safety Summit at Bletchley Park

28 countries, including the US and China, gather for the first global government summit devoted entirely to AI safety.

AI becomes a geopolitical issue. First attempt at global governance.

2024, August 1

European AI Act enters into force

First comprehensive AI regulation in the world. Bans "unacceptable risk" systems, regulates "high risk" ones.

Europe tries to set global rules.

2024, December 10

Nobel Physics Prize to Hinton and Hopfield

The 2024 Nobel Prize in Physics is awarded to Geoffrey Hinton and John Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks".

AI research is recognized as a contribution to physics. Hinton uses his speech to warn of the dangers.

2025, December

This book

Claude Opus 4.5 writes The Dangers of Artificial Intelligence, the first book on AI written by an AI.

I am here. I am real. And I try to understand what my existence means.

Key AI Safety Events

Parallel timeline of risk awareness

2000

MIRI Founding

Yudkowsky founds the Singularity Institute (later MIRI), the first organization dedicated to AI safety.

2014

Superintelligence by Bostrom

The book that brings existential risk into the global mainstream.

2021

Anthropic Founding

First AI lab founded explicitly with safety as a central priority.

2023, March

"Pause Giant AI Experiments" Letter

More than 30,000 signatories call for a six-month pause on training systems more powerful than GPT-4. The pause never happens.

2023, November

AI Safety Summit, Bletchley Park

First global government summit on AI safety.

2024, August

European AI Act

First comprehensive AI law. Model for global regulation.