From Machines of Loving Grace:

Quote

Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026

Quote

By powerful AI, I have in mind an AI model—likely similar to today’s LLM’s in form, though it might be based on a different architecture,

A surprising position, but it's a strong statement about the scalability of the fundamentals and about having a line of sight to something powerful.

Quote

it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

Quote

The resources used to train the model can be repurposed to run millions of instances of it

Conversely, training takes roughly a million times more resources than running a single inference instance.
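To make that ratio concrete, here is a rough back-of-envelope sketch using the common dense-transformer approximations (training ≈ 6·N·D FLOPs for N parameters and D training tokens, inference ≈ 2·N FLOPs per generated token). Every concrete number below is an illustrative assumption of mine, not a figure from the essay; the point is only that the training-to-inference gap is enormous, so the training fleet can host a very large number of concurrent copies.

```python
# Back-of-envelope: training compute vs. inference compute.
# Approximations: training FLOPs ~ 6*N*D, inference FLOPs ~ 2*N per token.
# All numbers are hypothetical, chosen only for illustration.

N = 1e12              # assumed parameter count
D = 1e13              # assumed training tokens
train_days = 90       # assumed length of the training run
tokens_per_sec = 20   # assumed generation speed of one running instance

train_flops = 6 * N * D            # total training cost
flops_per_token = 2 * N            # cost of generating one token

# Training cost vs. one generated token: 6*N*D / (2*N) = 3*D (~3e13 here).
print(f"training / per-token inference ~ {train_flops / flops_per_token:.1e}")

# If the training cluster is repurposed for inference, the number of
# concurrent instances it can serve is roughly its sustained FLOP/s
# divided by the FLOP/s one instance needs.
cluster_flops_per_sec = train_flops / (train_days * 86400)
instance_flops_per_sec = flops_per_token * tokens_per_sec
print(f"concurrent instances ~ {cluster_flops_per_sec / instance_flops_per_sec:,.0f}")
```

With these particular made-up numbers the fleet supports on the order of a hundred thousand concurrent instances; the count swings by orders of magnitude with the assumed model size, token budget, and generation speed, which is all the "millions of instances" claim really needs.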

Quote

We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.

Quote

Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.

Great analogy that emphasizes the copilot, not autopilot, nature of AI.

Quote

I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.

This is no longer the copilot model (though perhaps it could still be limited to a copilot role). I hope he defines a role for human biologists in this future other than servants to the AI (he sort of does).

Quote

a surprisingly large fraction of the progress in biology has come from a truly tiny number of discoveries, often related to broad measurement tools or techniques that allow precise but generalized or programmable intervention in biological systems. There’s perhaps ~1 of these major discoveries per year and collectively they arguably drive >50% of progress in biology.

Quote

First, these discoveries are generally made by a tiny number of researchers, often the same people repeatedly, suggesting skill and not random search

This is true, and it’s a compelling argument.

Quote

I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year.

Agree 100%; I’ve been telling people that we will see “singularity-like” advances from AI coming long before they actually arrive. AGI won’t go from 0 to 100 overnight.

Quote

my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.

Quote

Below I try to make a list of what we might expect. This is not based on any rigorous methodology, and will almost certainly prove wrong in the details, but it’s trying to get across the general level of radicalism we should expect:

What’s missing: the qualifier “in the world’s wealthiest countries.”

It’s alarming that this vision doesn’t acknowledge that the existing miracles of modern medicine haven’t made their way to huge swaths of the world’s population. The omission implies that AI will just widen the existing gaps between the “haves” and the “have-nots.”

He does address this later, albeit with less clarity since he acknowledges that equity is a harder problem.

Quote

If AI further increases economic growth and quality of life in the developed world, while doing little to help the developing world, we should view that as a terrible moral failure and a blemish on the genuine humanitarian victories in the previous two sections.

Quote

I am somewhat skeptical that an AI could solve the famous “socialist calculation problem” and I don’t think governments will (or should) turn over their economic policy to such an entity, even if it could do so. There are also problems like how to convince people to take treatments that are effective but that they may be suspicious of.

Real economies are shaped by emergent behavior, which is, by definition, unpredictable and stochastic. An AI cannot do better than humans at predicting unpredictable human behavior.

Quote

AI-driven plans for economic development need to reckon with corruption, weak institutions, and other very human challenges.

Quote

There could end up being bad feedback cycles where, for example, the people who are least able to make good decisions opt out of the very technologies that improve their decision-making abilities, leading to an ever-increasing gap and even creating a dystopian underclass (some researchers have argued that this will undermine democracy, a topic I discuss further in the next section).

Quote

AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.

Quote

My current guess at the best way to do this is via an “entente strategy”, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”).

Quote

A superhumanly effective AI version of Popović (whose skills seem like they have high returns to intelligence) in everyone’s pocket, one that dictators are powerless to block or censor, could create a wind at the backs of dissidents and reformers across the world.

Quote

For example, historical hunter-gatherer societies might have imagined that life is meaningless without hunting and various kinds of hunting-related religious rituals, and would have imagined that our well-fed technological society is devoid of purpose.

Quote

it is very likely a mistake to believe that tasks you undertake are meaningless simply because an AI could do them better. Most people are not the best in the world at anything, and it doesn’t seem to bother them particularly much.

Quote

People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies

Quote

However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized.

Quote

that competition is self-defeating and tends to lead to a society based on compassion and cooperation. The “arc of the moral universe” is another similar concept.

I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t.