August 17, 2025 - OpenAI has introduced the new GPT-5 model to ChatGPT, but it still has the potential for error, a point re-emphasized this week by a senior OpenAI executive. In an interview with The Verge's Decoder podcast, Nick Turley, head of ChatGPT at OpenAI, noted, "In terms of reliability, there's a big discontinuity between being reliable and being completely reliable." He further explained, "Until we can prove that ChatGPT is more reliable than human experts in all areas, not just some, we'll continue to advise you to double-check its answers."

According to Turley, "I think people will continue to use ChatGPT as a second point of reference and not necessarily as a primary source of facts."
The problem, however, is that while it's easy to simply accept a chatbot's answers, generative AI tools (not just ChatGPT) are prone to "hallucinating", i.e., making up information. This happens because they predict an answer to a query from statistical patterns in their training data, without any grounded understanding of the facts.
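To make that mechanism concrete, here is a minimal, purely illustrative sketch: a toy model that picks the next word by learned probability alone. The continuation table and its weights are invented for this example; the point is only that nothing in the sampling step ever checks whether the output is true.

```python
import random

# Toy "language model": continuation probabilities learned purely from
# training-text frequencies. These entries and weights are made up.
NEXT_TOKEN_PROBS = {
    "The capital of France is": [("Paris", 0.90), ("Lyon", 0.07), ("Atlantis", 0.03)],
}

def generate(prompt: str) -> str:
    """Sample the next token by probability alone; no fact store is
    consulted, so a fluent but wrong answer is always possible."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
    return random.choices(tokens, weights=weights, k=1)[0]

print(generate("The capital of France is"))  # usually "Paris", occasionally a confident error
```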
As good as AIs are at guessing, at the end of the day they are still guessing, as 1AI notes. Turley acknowledged that ChatGPT performs best when used in conjunction with tools that give it a better grasp of the facts, such as traditional search engines or company-specific internal data. He said, "I still firmly believe, without a doubt, that the right product is one that combines large language models with ground truth, and that's why we brought search to ChatGPT, and I think it made a huge difference."
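The combination Turley describes is commonly implemented as retrieval-augmented generation: fetch relevant sources first, then have the model answer from them. The sketch below is a hedged illustration of that general pattern, not OpenAI's implementation; the corpus and the search and ask_llm stand-ins are hypothetical placeholders so the example runs on its own.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

# Toy corpus standing in for a web index or company-internal data store
# (contents invented for illustration).
CORPUS = [
    Document("Staff handbook", "Support tickets must be answered within 24 hours."),
]

def search(query: str) -> list[Document]:
    # Trivial keyword match; a real system would call a search engine
    # or vector index here.
    words = query.lower().split()
    return [d for d in CORPUS if any(w in d.text.lower() for w in words)]

def ask_llm(prompt: str) -> str:
    # Placeholder for a call to any LLM API; it echoes the prompt so the
    # example runs without network access.
    return f"(model response grounded in)\n{prompt}"

def grounded_answer(question: str) -> str:
    """Retrieve sources first, then instruct the model to answer only from
    them, so claims can be checked against something other than training data."""
    context = "\n\n".join(f"[{d.title}] {d.text}" for d in search(question))
    prompt = (
        "Answer using only the sources below and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(grounded_answer("How fast must support tickets be answered?"))
```

The design choice mirrors Turley's point: the model's fluency is kept, but its claims are anchored to retrievable sources that a user can verify.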
Turley said GPT-5 has made "great progress" in reducing "hallucinations," but is still a long way from perfect. He said, "I believe we will eventually solve the 'hallucination' problem, but I also believe we won't solve it in the next quarter."