{"id":31374,"date":"2025-03-24T09:57:44","date_gmt":"2025-03-24T01:57:44","guid":{"rendered":"https:\/\/www.1ai.net\/?p=31374"},"modified":"2025-03-24T09:57:44","modified_gmt":"2025-03-24T01:57:44","slug":"%e5%9b%be%e7%81%b5%e5%a5%96%e5%be%97%e4%b8%bb%e6%9d%a8%e7%ab%8b%e6%98%86%ef%bc%9a%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b%e5%8f%91%e5%b1%95%e5%b7%b2%e6%8e%a5%e8%bf%91%e7%93%b6%e9%a2%88%ef%bc%8c-2","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/31374.html","title":{"rendered":"Turing Award Winner Yann LeCun: Large Language Model Development Is Approaching a Bottleneck; AI Cannot Achieve Human-Level Intelligence Through Text Training Alone"},"content":{"rendered":"<p>Turing Award winner and Meta chief <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai\" title=\"[View articles tagged with [AI]]\" target=\"_blank\" >AI<\/a> scientist <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%9d%a8%e7%ab%8b%e6%98%86\" title=\"[View articles tagged with [Yann LeCun]]\" target=\"_blank\" >Yann LeCun<\/a> appeared on the Big Technology Podcast that aired on the 20th, discussing why today's generative AI <strong>struggles to make scientific discoveries<\/strong> and how AI will evolve in the future.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-31375\" title=\"5d88a5b1j00stlurc003wd000sg00imp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/5d88a5b1j00stlurc003wd000sg00imp.jpg\" alt=\"5d88a5b1j00stlurc003wd000sg00imp\" width=\"1024\" height=\"670\" \/><\/p>\n<p>He said that existing AI technologies such as <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large language model]]\" target=\"_blank\" >large language models<\/a> are <strong>essentially trained on text and generate answers through statistical regularities<\/strong>; they cannot \"<strong>create anything new<\/strong>\" and are therefore limited. 
Humans, on the other hand, are able to use common sense and mental models to think through and solve new problems, an ability that large AI models lack. \"They are simply trained on large amounts of textual data for retrieval and generation, and lack the ability to understand the physical world and reason abstractly.\"<\/p>\n<p>He believes the development of large language models <strong>is nearing a bottleneck<\/strong>: the returns from growing the training data are diminishing, and acquiring further data is <strong>not only costly but also unlikely to deliver the desired results<\/strong>. Simply scaling up a large language model and training it on more data <strong>cannot achieve human-level AI<\/strong>, because large models lack genuine reasoning ability and an understanding of the physical world.<\/p>\n<p>LeCun said that \"<strong>real AI<\/strong>\" <strong>needs to understand the physical world, have persistent memory, and support reasoning and planning<\/strong>.<\/p>\n<p>As 1AI previously reported, LeCun predicted in February this year that AI technology will usher in a further revolution by 2030. However, current AI systems are still limited, and existing technology can hardly support household robots and self-driving cars.<\/p>\n<p>LeCun is working on a new system that aims to help AI \"understand\" reality by building a model that predicts the behavior of the physical world. \"AI can't compete with humans yet. If we can develop a system that is as smart as a cat or a mouse, that would be a big step forward.\"<\/p>","protected":false},"excerpt":{"rendered":"<p>Yann LeCun, Turing Award winner and Meta's chief AI scientist, was a guest on the \"Big Technology Podcast\" that aired on the 20th, and talked about why it is difficult for current generative AI to make scientific discoveries and how AI will develop in the future. 
He said that existing AI technologies such as large language models are essentially trained on text and generate answers through statistical regularities; they cannot \"create anything new\" and are therefore limited. Humans are able to use common sense and mental models to think through and solve new problems, an ability that AI models lack. \"They are trained on large amounts of textual data for retrieval and generation, and lack the ability to understand the physical world and reason abstractly.\"<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[411,706,5346],"collection":[],"class_list":["post-31374","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-706","tag-5346"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/31374","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=31374"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/31374\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=31374"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=31374"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=31374"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=31374"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}