{"id":25934,"date":"2024-12-31T09:26:18","date_gmt":"2024-12-31T01:26:18","guid":{"rendered":"https:\/\/www.1ai.net\/?p=25934"},"modified":"2024-12-31T09:26:18","modified_gmt":"2024-12-31T01:26:18","slug":"meta-%e9%a6%96%e5%b8%ad%e7%a7%91%e5%ad%a6%e5%ae%b6%e6%9d%a8%e7%ab%8b%e6%98%86%ef%bc%9a%e5%ae%9e%e7%8e%b0-agi-%e6%9c%80%e4%b9%90%e8%a7%82%e9%9c%80%e8%87%b3%e5%b0%91%e4%ba%94%e5%88%b0%e5%85%ad%e5%b9%b4","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/25934.html","title":{"rendered":"Meta Chief Scientist Yann LeCun: Realizing AGI Will Take at Least Five to Six Years, Even in the Most Optimistic Case"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/meta\" title=\"[View articles tagged with [Meta]]\" target=\"_blank\" >Meta<\/a> Chief Scientist and Turing Award winner <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%9d%a8%e7%ab%8b%e6%98%86\" title=\"[View articles tagged with [Yann LeCun]]\" target=\"_blank\" >Yann LeCun<\/a> (note: LeCun is French) shared his views on artificial general intelligence on the 29th on the \"Into the Impossible\" podcast.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-25935\" title=\"e397a37ej00spc3yt005hd000jc00agp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/e397a37ej00spc3yt005hd000jc00agp.jpg\" alt=\"e397a37ej00spc3yt005hd000jc00agp\" width=\"696\" height=\"376\" \/><\/p>\n<p>He said that the negative impact of AI is currently being <strong>greatly over-amplified<\/strong>, while its capabilities remain very limited at this time. 
\"In the most optimistic scenario, realizing <a href=\"https:\/\/www.1ai.net\/en\/tag\/agi\" title=\"[View articles tagged with [AGI]]\" target=\"_blank\" >AGI<\/a> will take at least 5-6 years.\" Society is currently broadly worried about AI, and there are even views that AI \"may lead to the end\"; LeCun believes such views <strong>ignore the actual state of AI's development and its potential positive impacts<\/strong>.<\/p>\n<p>He said that AI's capacity for <strong>understanding and manipulating the physical world<\/strong> is still very limited, because it is trained mainly on <strong>text data<\/strong> and lacks the ability to perceive and understand the physical world, leaving it <strong>unable to interact naturally with its environment the way a human or an animal can<\/strong>. \"For example, a 10-year-old child or a cat can use '<strong>intuitive physics<\/strong>' to understand how to interact with the physical world, such as planning a jump trajectory or understanding the motion of an object. Current AI systems do not yet have these capabilities.\"<\/p>\n<p>The industry already holds many different views on AI and even AGI; some are listed below:<\/p>\n<ul>\n<li><strong>Microsoft AI CEO Mustafa Suleyman:<\/strong> Current hardware cannot enable AGI; it may become possible within the next <strong>two to five generations of hardware<\/strong>, though the probability of success within two years is not high. Given hardware development cycles of 18 to 24 months per generation, five generations could mean ten years. The industry's current focus on AGI is somewhat misplaced: \"Instead of obsessing over the Singularity or superintelligence, I'm more focused on developing <strong>AI systems that actually help humans<\/strong>. 
These AIs should <strong>serve users<\/strong> and be part of their team, rather than pursuing unattainable theoretical goals.\"<\/li>\n<li><strong>OpenAI CEO Altman:<\/strong> By 2025, we may for the first time see systems with artificial general intelligence (AGI) capabilities. Such systems could perform complex tasks like humans, and even <strong>use multiple tools to solve problems<\/strong>. \"Maybe in 2025, we'll see some AGI systems where people will marvel, '<strong>Wow! That was beyond my wildest dreams.<\/strong>'\"<\/li>\n<li><strong>2024 Nobel Prize-winning physicist and \"Godfather of AI\" Hinton:<\/strong> \"Over the years, I've come to think of AI more and more as <strong>a form of advanced intelligence that could surpass humanity<\/strong>. Because of this, we need to think seriously about what it would mean for human society if AI possessed intelligence beyond that of humans and even the ability to control us. This <strong>is not science fiction<\/strong>, but a tangible risk.\" He said on a BBC program later this month that <strong>there is a 10% to 20% probability that AI will lead to human extinction within the next thirty years<\/strong>, and warned that the technology is changing \"much faster than expected\".<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>Meta Chief Scientist and Turing Award winner Yann LeCun (note: LeCun is French) shared his views on artificial general intelligence on the 29th on the \"Into the Impossible\" podcast. He said that the negative effects of AI are currently overblown and that its capabilities are still very limited. 
\"In the most optimistic scenario, the realization of AGI is at least 5-6 years away.\" Society is currently broadly worried about AI, and there are even views about AI's \"possible doom\", which LeCun believes ignore the actual state of AI's development and its potential positive impact. He said that AI's ability to understand and operate in the physical world is still very limited, because it is trained mainly on text data.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[151,297,5346],"collection":[],"class_list":["post-25934","post","type-post","status-publish","format-standard","hentry","category-news","tag-agi","tag-meta","tag-5346"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25934","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=25934"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25934\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=25934"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=25934"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=25934"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=25934"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}