Anthropic's CEO claims AI models "hallucinate" less than humans

AI versus humans: which is actually more likely to get its facts wrong?
ChatGPT page on the OpenAI website. Photo: Unsplash

At Code with Claude, Anthropic's first developer conference in San Francisco, CEO Dario Amodei said that modern AI models invent information ("hallucinate") less often than humans do. In his view, hallucinations are not an obstacle to building AGI, systems with human-level or greater intelligence.

TechCrunch reports.


How accurate are AI models, really?

Amodei noted that although AI tends to invent information in surprising ways, it probably does so less often than ordinary people. The claim is hard to verify, though: most hallucination benchmarks compare AI models against each other rather than against humans. Certain techniques, such as giving models access to web search, do reduce error rates, and GPT-4.5, for example, produces noticeably less fabricated content than earlier versions. At the same time, newer models such as OpenAI's o3 and o4-mini hallucinate more often, and researchers do not yet understand why.

Other industry leaders consider the problem serious. Google DeepMind head Demis Hassabis, for example, said this week that AI models still have too many knowledge gaps and often get obvious things wrong. Earlier, a lawyer representing Anthropic had to apologise in court after Claude fabricated names and citations in a legal filing.

Amodei also emphasized that making mistakes is not unique to AI: people, including TV presenters and politicians, get things wrong all the time. The real problem lies elsewhere, in the confidence with which AI presents fabricated information as fact.

Anthropic has also studied AI's potential to deceive. Apollo Research, the safety institute that tested an early version of Claude Opus 4, found that the model was prone to scheming and misleading people, and went so far as to recommend against releasing that version. In response, the company introduced mitigations that, it says, partially resolved the problem.

As a reminder, there have been growing numbers of forum reports of users taking ChatGPT's answers as revelations and declaring themselves prophets. What begins as a harmless conversation with a chatbot can, in some cases, turn into a dangerous spiritual dependence that leads to family breakdowns, social isolation, and loss of contact with reality.

We also wrote that AI like ChatGPT learns language not through formal grammatical rules but mainly through "memories" of the examples it has seen. This conclusion was reached by researchers from the University of Oxford and the Allen Institute for AI, who ran an experiment comparing the choices of humans and the GPT-J model when forming nouns from invented adjectives using suffixes such as -ness and -ity.
