May 23, 2025 - TechCrunch reported today that at Anthropic's first developer event, Code with Claude, held in San Francisco, CEO Dario Amodei said that today's AI models may "hallucinate" less frequently than humans do.

Note: The term "hallucination" refers to an AI fabricating content and presenting it as fact.
Amodei emphasized that AI hallucinations will not stop Anthropic on its path toward AGI. "It depends on what metrics you use, but I suspect that AI models probably hallucinate less than humans, they just go wrong in more surprising ways," he said.
Amodei has long been one of the industry's most optimistic voices on AGI. He said, "People are always trying to find the 'hard limits' of what AI can do, but no such limits are anywhere in sight."
However, not everyone agrees. Google DeepMind CEO Demis Hassabis said earlier this week that current AI models are "full of holes" and get even some basic questions wrong.
There are also indications that some newer models hallucinate more when handling complex reasoning tasks. For example, OpenAI's o3 and o4-mini have higher hallucination rates than the company's earlier reasoning models, and even OpenAI itself can't figure out why.
Amodei also noted that humans themselves make mistakes all the time, so the fact that an AI makes mistakes doesn't mean it's "not smart enough." He conceded, however, that AI models stating misinformation with a high degree of confidence does tend to cause problems.