{"id":23042,"date":"2024-11-14T09:41:40","date_gmt":"2024-11-14T01:41:40","guid":{"rendered":"https:\/\/www.1ai.net\/?p=23042"},"modified":"2024-11-14T09:41:40","modified_gmt":"2024-11-14T01:41:40","slug":"%e6%b6%88%e6%81%af%e7%a7%b0-openai%e3%80%81%e8%b0%b7%e6%ad%8c%e7%ad%89%e5%b7%a8%e5%a4%b4-ai-%e6%a8%a1%e5%9e%8b%e9%81%87%e7%93%b6%e9%a2%88%ef%bc%9a%e8%ae%ad%e7%bb%83%e6%95%b0%e6%8d%ae%e9%9a%be%e5%af%bb","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/23042.html","title":{"rendered":"OpenAI, Google and other giants' AI models hit bottlenecks: training data hard to find, high costs, sources say"},"content":{"rendered":"<p>According to Bloomberg, major AI companies including <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>, <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a> and Anthropic have hit bottlenecks in developing more advanced <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [AI models]]\" target=\"_blank\" >AI models<\/a> and are facing the dilemma of \"diminishing returns\".<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/202405161743235068_19.jpg\" alt=\"EU invests \u20ac500 million to boost generative AI\" \/><\/p>\n<p>OpenAI's newest model, Orion, reportedly underperforms on coding tasks, showing no significant improvement over GPT-4. 
Google's upcoming Gemini software faces similar challenges, while Anthropic has delayed the launch of its highly anticipated Claude 3.5 Opus model.<\/p>\n<p>Industry experts noted that <strong>these challenges stem from the difficulty of finding \"new, untapped, high-quality human-generated training data\" and from the enormous cost of developing and operating both old and new models<\/strong>. Silicon Valley has long believed that more computing power, more data, and larger models would inevitably yield better performance, and eventually artificial general intelligence (AGI), but this view may rest on false assumptions.<\/p>\n<p>To address these challenges, companies are exploring alternative approaches, <strong>including additional training after a model's initial training is complete (improving responses and optimizing tone through human feedback) and the development of AI tools, called agents, that can perform specific tasks<\/strong>, such as booking a flight or sending an email on behalf of a user.<\/p>\n<p>Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, said, \"The AGI bubble is bursting, and it may take a different approach to training to make AI models perform well on a variety of tasks.\" Other experts have expressed similar views.<\/p>","protected":false},"excerpt":{"rendered":"<p>According to Bloomberg, AI giants including OpenAI, Google and Anthropic are facing \"diminishing returns\" as they hit a bottleneck in developing more advanced AI models. OpenAI's newest model, Orion, reportedly struggled with coding tasks, with no significant improvement over GPT-4. Google's upcoming Gemini software faces similar challenges, while Anthropic has delayed the launch of its highly anticipated Claude 3.5 Opus model. 
According to industry experts, these challenges stem from the difficulty of finding \"new, untapped, high-quality human-generated training data\" and the high cost of developing and operating both old and new models.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,190,281],"collection":[],"class_list":["post-23042","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-openai","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23042","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=23042"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23042\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=23042"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=23042"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=23042"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=23042"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}