Superintelligence is an artificial intelligence that outperforms humans in virtually all domains. The term is sometimes used interchangeably with AGI (artificial general intelligence), and Anthropic calls this “powerful AI.”1

It is not clear to me how this will be quantified; it strikes me as something like the Turing test: easy to define, hard to assess concretely.

Dario Amodei wrote an essay called Machines of Loving Grace that gives a comprehensive and hyperbolic description of what a world with superintelligence could look like. From my own experience and observations, limited as they are, I tend to agree with most of what he said.

Architecture

In practice, superintelligence will likely not be a single LLM but rather a reasoning model, or a collection of interconnected models, that applies a variety of techniques and chews on a problem for an extended period of time (see System 2 thinking).1 Sam Altman has said that superintelligence could be achieved using current computer hardware.2
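One way to picture the “collection of models chewing on a problem” idea is test-time sampling with a majority vote (sometimes called self-consistency). Below is a toy sketch with a stubbed-out model; `toy_model` and `deliberate` are hypothetical names, and no real LLM call is assumed:

```python
from collections import Counter

def toy_model(problem: str, seed: int) -> str:
    """Stand-in for one sampled LLM response. In this toy, two out of
    every three samples converge on the right answer; the rest are
    scattered distractors."""
    return "42" if seed % 3 != 0 else f"wrong-{seed}"

def deliberate(problem: str, samples: int = 25) -> str:
    """System-2-style loop: spend extra inference compute sampling many
    candidate answers, then return the most common one."""
    answers = [toy_model(problem, seed=i) for i in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(deliberate("What is 6 * 7?"))  # -> 42
```

The point is only the shape of the technique: individual samples are unreliable, but spending more compute at inference time and aggregating makes the system's answer robust.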

Dario Amodei predicted that such a model will be millions of times cheaper to run inference on than to train, allowing the datacenter that trained it to be repurposed to act as a “country of geniuses.”1
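The “country of geniuses” claim is at bottom a back-of-envelope arithmetic argument about the training/inference compute ratio. A minimal sketch, with every number hypothetical:

```python
# Back-of-envelope: if the cluster that trained a model is repurposed for
# inference, how many model instances can it serve concurrently?
# Every constant below is a made-up illustrative figure, not a real estimate.

TRAIN_FLOPS = 1e26       # hypothetical total training compute
TRAIN_DAYS = 100         # hypothetical training duration
TOKEN_FLOPS = 1e12       # hypothetical compute per generated token
TOKENS_PER_SEC = 10      # hypothetical per-instance generation speed

# Sustained cluster throughput implied by the training run:
cluster_flops_per_sec = TRAIN_FLOPS / (TRAIN_DAYS * 24 * 3600)

# Compute one running instance consumes per second:
instance_flops_per_sec = TOKEN_FLOPS * TOKENS_PER_SEC

# Concurrent instances the same cluster could host:
instances = cluster_flops_per_sec / instance_flops_per_sec

print(f"Concurrent instances: {instances:,.0f}")  # on the order of a million
```

With these toy numbers the cluster hosts roughly a million concurrent instances; the specific figures don't matter, only that the training-to-inference cost ratio is what turns one training cluster into a large population of model instances.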

When?

In May 2023, OpenAI said that superintelligence might be achievable within ten years.3

In September 2024, Sam Altman shortened his estimate to “a few thousand days,” or somewhat under a decade.4 A month later, Dario Amodei said “as early as 2026,” which is effectively the same thing.1

In November 2024, Sam Altman clarified further:

In a recent interview with Y Combinator CEO Garry Tan, Altman said, “We basically know what to go do” to achieve artificial general intelligence—technology that is on par with human abilities—and part of it involves “using current models in creative ways.”

Footnotes

  1. Machines of Loving Grace

  2. “we believe it is achievable with current hardware” (reddit.com)

  3. Governance of superintelligence

  4. The Intelligence Age