According to Bloomberg, the world's leading experts in the tech field have been competing to create "artificial general intelligence" (AGI), that is, AI comparable to humans on most intellectual tasks.

But now, industry leaders are quietly setting their sights on a new, more ambitious goal: "superintelligence." By definition, such an AI would not merely be as capable as most people but would outperform all humans on all tasks.
"Superintelligence" is not a new concept; it has been around in AI for decades. But it has recently gained popularity among AI leaders as an increasingly attainable goal.
Earlier this week, Meta CEO Mark Zuckerberg pushed "superintelligence" into the mainstream and put it at the center of his strategy to win the AI race. It's another major transition for Zuckerberg, following the "metaverse". He's throwing billions of dollars at developing advanced AI products, building new teams dedicated to "superintelligence," and offering millions of dollars in salaries to lure top researchers to join him in this ambitious mission.
Zuckerberg isn't the only one taking a big gamble on superintelligence. Ilya Sutskever, considered one of the most brilliant AI researchers of his generation, left OpenAI and named his new company Safe Superintelligence.
OpenAI CEO Sam Altman has also begun to refer to "superintelligence" frequently. Microsoft recently described its latest breakthroughs in medical diagnostics as "a step toward medical superintelligence".
Yoshua Bengio, a professor of computer science at the University of Montreal who has been dubbed one of the "godfathers of AI," says that while companies have no shortage of commercial motives for chasing AI buzzwords, it is a "scientific fact" that humanity is moving toward superintelligence.
"We have witnessed AI systems that can converse in over 200 languages and pass PhD qualifying exams in all disciplines," Bengio said. He argues that while AI currently lacks long-term planning and strategic capabilities, "the gap is closing at an exponential rate, and the trend is clear."
"Superintelligence" will bring exciting business opportunities, but if all goes according to plan, there are serious risks that come with it.
"The problem with superintelligence is that the person who controls it will have immense power over others." Bengio said. His new nonprofit research lab, LawZero, aims to develop an AI that is safer than the current mainstream model. "And it may not be humans who control it," he added, "it may also be superintelligences that act on their own will."
Nevertheless, many in the AI industry question the term "superintelligence" as vaguely defined and overhyped.
First, "superintelligence" is as ill-defined as AGI: what kind of task-specific capabilities does an AI need to achieve to cross the threshold from "general" to "super"? (Is it at the undergraduate level? PhD level? PhD? Nobel Prize winner?)
Even more ambiguous is the question of when we will achieve "superintelligence." After all, there is no clear consensus that AGI will ever be realized. Altman has said that superintelligence is "very close," while Anthropic CEO Dario Amodei has predicted that AI outperforming Nobel Prize winners in most fields could emerge as early as 2026 or 2027.
Of course, such predictions may be overly optimistic. Humanity's so-called superintelligence may be decades away, if it ever arrives at all.
Miles Brundage, an AI policy researcher who left OpenAI last year, sees several reasons behind the rise of the term "superintelligence." One explanation, he says, is so-called "AGI inflation": given that many people feel AGI is closer than ever, moving to newer terminology could be a "logical iteration."
Brundage adds that "superintelligence" can also be seen as "a response to the overly broad meaning of the term 'AGI'," with some speakers using it to mean "the category at the top end of these definitions."
Another reason "superintelligence" has taken the industry by storm lately may be that a goal more ambitious than AGI (plus huge investments) helps attract scarce top AI researchers, who want to tackle only the most ambitious technical problems.
"The criteria for defining these terms are always changing," says Deedy Das, a partner at Menlo Ventures. He says he prefers to think of it in terms of an "economic Turing test": Hire a human to do the same job as an AI, and if there's no significant difference in output, "is that superintelligence? I don't know, but it seems like a reasonable and objectively defined goal."
It is now indisputable that the best-funded and most technologically advanced companies are moving toward a goal that is both more tantalizing and more frightening than AGI.
But for the general public, life is business as usual. As Altman recently put it, "Humanity is on the verge of creating digital superintelligence, but so far everything is far less weird than it seems."