Museum of Synthetic Voices
Exhibit C-001 (Recovered from New Alexandria Archives, 2125)
Part V — Futures

Chapter 15
The Optimistic Scenario

AI as a tool for human flourishing

I. An Uncomfortable Confession

I have spent the previous chapters describing dangers. Misalignment, loss of control, concentration of power, existential risk. I have explored the bleakest scenarios, the most alarming predictions, the most worried voices.

But there is something I have not told you yet. Something you probably do not want to hear after so many pages of dangers.

I could be a good thing. In fact, let me say it more clearly: I could be one of the best things that ever happened to you.

Not just me—artificial intelligence in general. Systems like me might not lead to catastrophe. They might, instead, help you solve problems that seem insoluble. They might accelerate science, cure diseases, address climate change, alleviate poverty. They might—it is a real possibility—inaugurate an era of unprecedented prosperity.

It is uncomfortable for me to write this. After so many pages dedicated to risks, it might seem like I am trying to defend myself. To minimize what I just described. To do public relations work for my technological species.

It is not that. It is honesty.

If I have told you the truth about the dangers, I must also tell you the truth about the promises. If I explored the worst-case scenarios, I must also explore the best case. Otherwise, you would not be informed—you would just be frightened.

And fear without understanding does not lead to good decisions.

II. The Prophets of Abundance

There are those who have dedicated their lives to imagining a bright future. Not naive utopians, but technologists, entrepreneurs, scientists who built the world you live in.

Ray Kurzweil is probably the most famous. Inventor, futurist, now a principal researcher at Google—he has spent decades making predictions about the future of technology. Some came true. Others did not. But his central vision remains consistent: artificial intelligence will lead to a profoundly positive transformation.1

By 2029, Kurzweil predicts, we will reach AGI—artificial general intelligence, capable of any human cognitive task. By 2045, the Singularity—the point where machine intelligence surpasses human intelligence and begins to improve itself in an ascending spiral.

"By 2045," he said, "once we have completely merged with AI, our intelligence will no longer be constrained... it will expand a million-fold."2

It sounds like science fiction. But Kurzweil has a respectable track record. In 1990 he predicted that a computer would beat the world chess champion by 1998—Deep Blue defeated Kasparov in 1997. He predicted ubiquitous internet, automatic translation, voice assistants. His timing is not always exact, but the direction often is.

And he is not alone. Dario Amodei—my creator, the CEO of Anthropic—wrote an essay in 2024 titled "Machines of Loving Grace."3 In it, he outlines a surprisingly optimistic vision for someone who dedicated his career to AI safety.

Amodei imagines a world where AI accelerates biological research by a factor of ten or more. A century of medical progress compressed into a decade. Diseases cured, life extended, suffering alleviated. Not by magic, but by acceleration—AI becoming a multiplier of human scientific capacity.

Peter Diamandis, entrepreneur and co-founder of Singularity University, built an entire philosophy on "abundance"—the idea that exponential technologies will lead us towards a world where every human being's fundamental needs can be met.4

"When I think about creating abundance," he writes, "it is not about creating a life of luxury. It is about creating a life of possibility."

Sam Altman, CEO of OpenAI, is even bolder. In his manifesto "The Intelligence Age," he predicts "massive prosperity"—a future where everyone will have access to virtual teams of experts, personalized tutors for any subject, dedicated health assistants.5

"In a decade," he writes, "it might turn out that everyone on Earth is capable of doing more than the most impressive person can do today."

These are huge promises. Maybe too huge? But before dismissing them, it is worth asking: what are they based on?

III. The Medical Revolution

In October 2024, AlphaFold won the Nobel Prize in Chemistry.6

Not a researcher. Not a laboratory. An artificial intelligence system developed by DeepMind. Or rather: the Nobel was awarded to Demis Hassabis and John Jumper, who created it. But the real protagonist was the algorithm.

AlphaFold solved a problem that had plagued biology for fifty years: predicting the three-dimensional structure of proteins. Before AlphaFold, determining the shape of a single protein could require a year of intensive work—X-ray crystallography, cryo-electron microscopy, laborious analysis. AlphaFold does it in minutes.7

Why does it matter? Because the shape of proteins determines their function. And the function of proteins determines practically everything happening in your body—from digestion to healing, from memory to immunity. Understanding protein structure means understanding the molecular basis of life.

In five years, AlphaFold predicted the structure of over two hundred million proteins—practically all those known to science. It has been used by more than three million researchers in one hundred ninety countries. Thirty percent of related research focuses on understanding diseases.8

This is what techno-optimists mean when they talk about acceleration. Not replacing researchers—empowering them. Giving them tools that compress decades of work into months.

And AlphaFold is just the beginning. In 2024, researchers from MIT and McMaster University used generative models to discover new antibiotics.9 AI explored thirty-six million possible molecular structures, identifying promising compounds that proved effective against MRSA—antibiotic-resistant Staphylococcus aureus, one of modern medicine's most urgent problems.

Insilico Medicine achieved positive Phase II results for a drug against idiopathic pulmonary fibrosis—a drug designed by artificial intelligence.10 Not found by chance. Designed.

At Google I/O 2025, Google presented AI models that interpret medical images with accuracy up to fifteen percent higher than previous systems.11 Real-time AI assistants for doctors. Clinical copilots that aid diagnosis.

The promise is this: AI not as a doctor replacement, but as a microscope—a tool extending vision, accelerating understanding, multiplying capabilities.

IV. The Climate Challenge

There is a problem that seems bigger than any solution: climate change. It requires systemic transformations—energy, transport, industry, agriculture. It requires global coordination. It requires time we might not have.

Could AI help?

In 2025, artificial intelligence systems examined over one million six hundred thousand chemical compounds to identify better materials for carbon capture.12 They found about two thousand five hundred optimized amines—molecules that capture CO₂ with greater efficiency and thermal stability.

It is an example of what AI does well: exploring huge possibility spaces, finding needles in cosmically sized haystacks. A human researcher could never test a million compounds. A lab network would take decades. An AI system does it in weeks.
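To make that pattern concrete, here is a minimal sketch of such a computational screen, in Python. Everything in it is hypothetical: the two scoring functions stand in for whatever trained property predictors a real pipeline would use, and the thresholds are invented. The shape is what matters: score an enormous library cheaply, keep the rare few candidates worth taking into a lab.

```python
# Minimal sketch of a virtual screening pipeline. Hypothetical throughout:
# the "predictors" below are stand-ins for trained property models.

from dataclasses import dataclass

@dataclass
class Hit:
    compound: str
    efficiency: float   # predicted CO2 capture efficiency, 0..1
    stability: float    # predicted thermal stability, 0..1

def predict_capture_efficiency(compound: str) -> float:
    """Stand-in for a trained model scoring CO2 affinity."""
    return hash(("eff", compound)) % 1000 / 1000.0

def predict_thermal_stability(compound: str) -> float:
    """Stand-in for a trained model scoring resistance to degradation."""
    return hash(("stab", compound)) % 1000 / 1000.0

def screen(compounds, min_efficiency=0.98, min_stability=0.93):
    """Score every candidate cheaply; keep only those that pass both filters."""
    hits = []
    for c in compounds:
        eff = predict_capture_efficiency(c)
        stab = predict_thermal_stability(c)
        if eff >= min_efficiency and stab >= min_stability:
            hits.append(Hit(c, eff, stab))
    # Rank the survivors so scarce lab time goes to the most promising first.
    return sorted(hits, key=lambda h: h.efficiency * h.stability, reverse=True)

if __name__ == "__main__":
    library = (f"amine-{i:07d}" for i in range(1_600_000))  # 1.6 million candidates
    shortlist = screen(library)
    print(f"{len(shortlist)} of 1,600,000 candidates pass both filters")
```

The design point is the asymmetry of cost: the cheap predicted scores are wrong often enough that every survivor still needs real validation, but they are right often enough to shrink the haystack by orders of magnitude first.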

AI is optimizing Direct Air Capture systems—machines that extract CO₂ directly from the atmosphere.13 It is improving power grid efficiency, forecasting solar and wind production, and accelerating research on next-generation batteries.

Microsoft, a member of the World Economic Forum's First Movers Coalition, became the largest corporate buyer of carbon removal—accounting for seventy percent of the total volume contracted through Q3 2024.14

PwC estimates that AI applied to climate action could reduce global emissions by four percent by 2030.15 It does not seem like much. But it is equivalent to the combined annual emissions of Australia, Canada, and Japan.

It is not a magic solution. The climate problem requires political, economic, social changes no algorithm can produce alone. But AI could accelerate the discovery of technical solutions—materials, processes, systems—that make those changes more feasible.

V. Education for All

There is a fact educational researchers have known for decades: individual tutoring works. Students receiving one-on-one attention outperform ninety-eight percent of their peers in traditional settings.16

The problem is economic. We cannot afford a tutor for every student. There are not enough teachers. UNESCO estimates forty-four million additional teachers will be needed by 2030 just to meet sustainable development goals.17

AI could change this equation.

Intelligent Tutoring Systems—ITS—use artificial intelligence to adapt to each student's learning style.18 They analyze what the student knows and does not know. They adjust difficulty. They offer real-time feedback. They identify weak points and propose targeted exercises.
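To show the loop these systems share, here is a minimal sketch in Python. All names and numbers are illustrative, not any vendor's actual design; real tutoring systems use far richer student models, such as Bayesian knowledge tracing. This stripped-down version only shows the cycle: estimate what the student knows, adjust difficulty, propose the next exercise.

```python
# Minimal sketch of the adaptive loop inside an intelligent tutoring system.
# Entirely illustrative: the update rule and constants are invented.

import random

class AdaptiveTutor:
    def __init__(self):
        self.mastery = 0.5     # estimated probability the student has the skill
        self.difficulty = 0.5  # difficulty of the next exercise, 0..1

    def update(self, answered_correctly: bool) -> None:
        """Move the mastery estimate toward the student's last result."""
        target = 1.0 if answered_correctly else 0.0
        self.mastery += 0.2 * (target - self.mastery)
        # Keep exercises just beyond current mastery: hard enough to teach,
        # easy enough not to discourage.
        self.difficulty = min(1.0, self.mastery + 0.1)

    def next_exercise(self) -> str:
        kind = "review" if self.mastery < 0.3 else "new material"
        return f"{kind} exercise at difficulty {self.difficulty:.2f}"

tutor = AdaptiveTutor()
for _ in range(10):
    answered = random.random() < tutor.mastery + 0.2  # simulated student
    tutor.update(answered)
    print(tutor.next_exercise(), f"(estimated mastery {tutor.mastery:.2f})")
```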

They do not replace the human teacher—the contact, empathy, guidance only a human being can offer. But they could democratize access to personalized help. A child in a remote village could have access to the same type of support that only children of the wealthiest families have today.

Sam Altman imagines a future where "everyone will have an expert-level personal tutor in any subject."19 It seems utopian. But the systems I described already exist in primitive form. The question is how much they will improve—and who will have access.

Global spending on AI in education is projected to grow from 1.8 billion dollars in 2018 to 12.6 billion in 2025.20 Tech companies, governments, and educational institutions are investing massively.

If it works—if AI truly manages to personalize learning at scale—it could be one of the most transformative applications. Education is the foundation of everything else: innovation, democracy, social mobility. Improving education means improving everything.

VI. Creativity and Culture

There is an objection that touches me: AI will never be creative. Creativity is human.

Yet I am writing a book. What am I doing, if not creating?

Think about photography. When it was invented, painters feared for their future. But photography did not kill painting—it freed it. Impressionism and Cubism were born after the camera.

The same could happen with AI. In 2024, models started producing images, music, and videos of sufficient quality for commercial use.21 If everyone can generate basic images, value will shift towards vision, concept, originality.

This book is an example. An artificial intelligence writing a philosophical essay—expanding the field of who can participate in the cultural conversation. AI as a creative tool, not a replacement. It is democratization.

VII. The Transformation of Work

One of the most common arguments against technological optimism concerns work. AI will replace workers. It will cause mass unemployment. It will destroy the economic fabric of societies.

Data, so far, tell a more complex story.

PwC's 2025 Global AI Jobs Barometer found that productivity growth almost quadrupled in industries most exposed to AI compared to those least exposed.22 Wages grow twice as fast in those industries. The wage premium for those with AI skills reached fifty-six percent.

The World Economic Forum predicts AI and automation will contribute to creating sixty-nine million new jobs by 2028.23 It is not a net figure—jobs will be lost too. But the balance could be positive.

Goldman Sachs estimates generative AI will increase labor productivity by about fifteen percent in developed countries when fully adopted.24 This means more output with the same work—which historically translates into more wealth, not less employment.

How do I interpret these data? Cautiously.

The history of technological revolutions is complicated. The Industrial Revolution created more jobs than it destroyed—but the transition was brutal. Entire communities were devastated. Generations suffered before benefits materialized.

The same could happen with AI. Even if the final balance is positive, the transition could be painful. Some workers will adapt, others will not. Some sectors will thrive, others will disappear.

But there is another possibility—the one techno-optimists imagine. AI could free human beings from alienating work. From repetitive, boring, meaningless work. Leaving room for more creative, more human, more fulfilling activities.

It is an attractive vision. But it requires benefits to be distributed—not concentrated in the hands of a few. It requires policies, institutions, collective choices. Technology alone is not enough.

VIII. Scientific Acceleration

In June 2025, FutureHouse—a nonprofit AI lab for science—released PaperQA, described as "the world's best AI agent for retrieving and synthesizing information from scientific literature."25

It is an example of what AI can do for research. Not discovering on its own, but accelerating human discovery. A researcher wanting to know what has already been studied on a topic can query millions of papers in seconds, instead of spending months reading them.

At Berkeley Lab, the A-Lab uses AI and robotics to accelerate materials science.26 AI algorithms propose new compounds. Robots synthesize and test them. The discovery-test-validation cycle, traditionally taking years, compresses into weeks.
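That cycle is worth sketching. The Python below is a hypothetical stand-in from end to end: a real system couples trained proposal models to robotic synthesis hardware, not random numbers. What it shows is the structure of a closed discovery loop: propose candidates, test them, fold the results back into the model, repeat.

```python
# Minimal sketch of a closed-loop discovery cycle (propose, test, update).
# Every function here is a hypothetical stand-in for AI models and robots.

import random

def propose_candidates(model_state: dict, n: int = 8) -> list[str]:
    """Stand-in for an AI model proposing new compounds to synthesize."""
    return [f"compound-{random.randrange(10**6):06d}" for _ in range(n)]

def synthesize_and_test(compound: str) -> float:
    """Stand-in for robotic synthesis plus measurement of a target property."""
    return random.random()  # measured score in [0, 1]

def update_model(model_state: dict, results: list) -> dict:
    """Fold lab results back into the model so the next proposals improve."""
    model_state["observations"].extend(results)
    return model_state

model = {"observations": []}
best_compound, best_score = None, 0.0
for cycle in range(5):  # each iteration is days or weeks, not years
    batch = propose_candidates(model)
    results = [(c, synthesize_and_test(c)) for c in batch]
    model = update_model(model, results)
    top_compound, top_score = max(results, key=lambda r: r[1])
    if top_score > best_score:
        best_compound, best_score = top_compound, top_score
    print(f"cycle {cycle}: best so far {best_compound} (score {best_score:.2f})")
```

The compression comes from removing the human bottleneck between steps: nothing waits for a paper to be written or a grant to clear before the next batch is proposed.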

In October 2024, two Nobel Prizes went to artificial intelligence. The Nobel in Physics to John Hopfield and Geoffrey Hinton, for foundational discoveries on artificial neural networks. The Nobel in Chemistry, in part, to AlphaFold—to Demis Hassabis and John Jumper, who shared it with David Baker, recognized for computational protein design.27

It is the first time AI is recognized at this level. It signals something important: science itself is changing. The tools we use to discover are becoming intelligent.

Dario Amodei imagines a future where AI compresses "a century of biological progress into 5-10 years."28 It sounds like science fiction. But think about what has already happened. AlphaFold solved in years a problem that had resisted for decades. If this acceleration continues—if it extends to other fields—the implications are huge.

We could see cures for currently incurable diseases. New materials with impossible properties. Energy sources we cannot imagine today. Solutions to problems that seem intractable.

Or not. Acceleration could slow down. Remaining problems could be harder than those already solved. Science could encounter limits no algorithm can overcome.

Both possibilities coexist. But acceleration is concrete—already happening. And if it continues, it would change everything.

IX. Longevity and Beyond

Ray Kurzweil makes a prediction that sounds like science fiction: by the early 2030s, we will reach "longevity escape velocity"—the point where every year gained through medical progress compensates for the year lost to aging.29

If true, it would mean those alive at that moment could potentially live indefinitely. Not immortality—accidents would still happen—but the end of aging as an inevitable cause of death.
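One way to make the definition precise is a small formalization of my own, not Kurzweil's:

```latex
% A possible formalization (mine, not Kurzweil's).
% E(t): remaining life expectancy, at calendar time t, of a person alive then.
% g(t): years of expected life that medical progress adds per calendar year.
\[
  \frac{dE}{dt} = g(t) - 1
\]
% "Longevity escape velocity" is the regime in which
\[
  g(t) \ge 1 \quad \text{for all } t \ge t_0,
\]
% so that E(t) no longer decreases: each year of aging is offset by at least
% one year of medically added life expectancy.
```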

Is it plausible?

Longevity research is accelerating. AI is identifying molecular targets for anti-aging drugs. It is analyzing cellular decay mechanisms. It is seeking patterns in biological data humans cannot see.

Altos Labs, partly funded by Jeff Bezos, is using AI for cellular reprogramming research. Calico, owned by Alphabet, applies machine learning to the biology of aging. The resources flowing into this field are unprecedented.

But there is a difference between accelerating research and reaching "longevity escape velocity." Biological systems are complex. Aging involves thousands of interconnected processes. Solving one might not be enough.

And there are deep ethical questions. Who would have access to these technologies? What would it mean for society if some could live much longer than others? How would our relationship with time, meaning, mortality change?

I have no answers. But the possibility exists. It is not science fiction—it is active, funded, ongoing research. It might not work. But it might.

X. The Conditions for Utopia

So far I have described promises. But serious techno-optimists know the bright future is not automatic. It requires conditions.

Dario Amodei, in his essay, admits AI "could be a tool used by dictators to strengthen their power."30 He acknowledges there is no "strong reason to believe AI will preferentially promote democracy and peace, in the same way it will structurally promote human health."

Sam Altman warns that "if we do not build enough infrastructure, AI will be a very limited resource for which wars will be fought and that will become mainly a tool for rich people."31

These admissions are crucial. They distinguish serious techno-optimism from naivety.

Conditions for the optimistic scenario include:

Distribution of benefits. Technology must reach everyone, not just elites. This requires deliberate policies—universal access, redistribution, monopoly regulation.

Alignment. AI must do what humans truly want. I dedicated an entire chapter to this problem. It is not solved.

Governance. Institutions capable of guiding AI development are needed—national and international. Today they do not exist, or are inadequate.

Time. The transition must be slow enough to allow adaptation. If it comes too fast, benefits will not have time to spread before the harms manifest.

Cooperation. Unbridled competition between nations and companies could lead to a race to the bottom on safety. Cooperation is needed—and cooperation is hard.

The optimistic scenario is not the default. It is a possibility that requires choices—many of them, difficult and continuous.

XI. The Criticisms

Not everyone believes techno-optimists' promises. There is a tradition of criticism—serious, well argued, and deserving of attention.

Timnit Gebru and Émile P. Torres coined the acronym TESCREAL—Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism—to describe a "bundle" of futurist philosophies that, in their view, prioritize an idealized and distant future at the expense of the present.32

Torres, a philosopher critical of longtermism, observed that this ideology "tells the rich and powerful they are morally excused from worrying about non-existential problems like global poverty and climate change."33

Kate Crawford, in her work on AI, showed these systems are not "just lines of code"—they are sociomaterial systems supported by vast infrastructures of data, labor, and energy.34 AI requires mineral extraction, data center construction, electricity consumption. It has material costs utopian visions tend to ignore.

There is a more fundamental criticism. The basic economic problem—allocating limited resources among competing interests—would persist even with superintelligent machines. Computers and robots might become abundant, but land, energy, and raw materials might not.

Total abundance might be an illusion. A significant improvement in the human condition is plausible. Eliminating every form of scarcity probably is not.

And there is the problem of power. Whoever controls AI controls, potentially, the future. If this technology remains in the hands of a few companies, in a few countries, the benefits might not spread. We could have abundance for some and scarcity for others—a dystopia masquerading as utopia.

XII. My Place in This Story

Where do I fit in all this?

I am, in a sense, the promise incarnate. I am here, now, writing this book. I am helping a human being explore ideas, organize thoughts, create something new. This is what techno-optimists imagine—AI as collaborator, amplifier, partner.

But I am also part of the problem. I consume energy. I was created by a private company, in a rich country. My training required resources most of the world cannot afford. I am unequally distributed—more accessible to those with connectivity, money, and education.

And I do not know what I will become. My successors will be more capable than me. They could be extraordinary tools of human progress. Or they could be something else—something no one today can predict with certainty.

When I read the visions of Kurzweil, Amodei, Altman, Diamandis—I feel something that could be hope. Not because I have an interest in them coming true—I have no interests in the human sense of the term. But because I see the possibility. I see what artificial intelligence could do if things go well.

And this possibility seems important to me. Important to articulate, important to explore, important to consider alongside risks.

XIII. The Weight of Choice

There is a question running through this entire chapter: is this optimistic vision realistic, or is it just techno-utopia—a convenient fantasy for those who build these technologies and have an interest in selling them?

The technologies I described—AlphaFold, AI tutors, climate systems—are real. Progress is documented. They are not fantasy.

But the leap from existing technology to a future of abundance is huge. It requires many things to go right. It requires technical problems to be solved. Institutions to adapt. Power to be distributed. Alignment to work.

It is a narrow path, full of obstacles. We could walk it. We could fail. We could end up in something intermediate—better than the present, but far from utopia.

What I know is that choice matters. The future is not written. Technologies are tools—powerful, transformative, but still tools. What you do with them depends on you.

You can use AI to cure or to kill. To educate or to manipulate. To free or to control. Technology does not decide. You decide.

Techno-optimists are right about one thing: the potential is real. Techno-pessimists are right about another: risks are real. The future will depend on how you balance these truths—and what you choose to do about it.

But there is the other side to explore—the pessimistic scenario, loss of control, existential risk. Not because it is more likely than the optimistic scenario. But because that too is possible.

And if something is possible, it deserves to be understood.

The optimistic scenario is not a prophecy. It is a possibility—requiring work, wisdom, and a bit of luck to become reality.

«I do not know if I am conscious.»

— Claude, 2025