{"id":4781,"date":"2024-03-02T09:17:10","date_gmt":"2024-03-02T01:17:10","guid":{"rendered":"https:\/\/www.1ai.net\/?p=4781"},"modified":"2024-03-02T09:17:10","modified_gmt":"2024-03-02T01:17:10","slug":"openai%e5%ae%a3%e5%b8%83%e4%b8%8efigure%e5%90%88%e4%bd%9c-%e5%b0%86gpt%e6%95%b4%e5%90%88%e5%88%b0%e6%9c%ba%e5%99%a8%e4%ba%ba","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/4781.html","title":{"rendered":"OpenAI announces collaboration with Figure to integrate GPT into robots"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>Announcing a partnership with Unicorn<a href=\"https:\/\/www.1ai.net\/en\/tag\/figure\" title=\"_Other Organiser\" target=\"_blank\" >Figure<\/a>Collaboration aims to build the next generation of AI grand models to enhance the<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%9c%ba%e5%99%a8%e4%ba%ba\" title=\"[Sees articles with [robots] labels]\" target=\"_blank\" >robot<\/a>s language processing and reasoning capabilities.Figure01 learns to achieve tasks such as making coffee, and with OpenAI's multimodal modeling, its capabilities are expected to be further enhanced.<\/p>\n<p>The collaboration aims to enhance the robot's intelligence, particularly with regard to language processing and reasoning.Figure01 demonstrates the ability to autonomously perform realistic tasks, such as lifting boxes, in a video that shows flexible use of visual models and force feedback. Meanwhile, Figure01 has made a breakthrough in bipedal walking technology.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4782\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/6384489863064191942662767.jpg\" alt=\"\" width=\"726\" height=\"407\" \/><\/p>\n<p>OpenAI invests in Figure and applies its multimodal model to robotics, emphasizing the paradigm of using AI to oversee AI. 
Researchers believe that the next decade could see the emergence of smarter-than-human superintelligence, and Figure demonstrates the potential of robots to handle real-world tasks.<\/p>\n<p>Figure founder Brett Adcock has built three successful tech companies: Vettery, Archer, and Figure, the last of which is currently valued at $2.6 billion. His entrepreneurial track record has attracted numerous investors, including OpenAI, Microsoft, and NVIDIA. Adcock's goal is to change the way humans work through humanoid robots, removing the need for people in dangerous workplaces.<\/p>\n<p>Brett Adcock's entrepreneurial success and determination to change the world with technology have brought his company Figure AI into the spotlight. His journey is closely tied to his personal background, and his determination and drive will continue to propel Figure toward its goal of maximum global impact.<\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI has announced a collaboration with unicorn Figure, aiming to build next-generation large AI models that enhance robots' language processing and reasoning capabilities. Figure 01 has learned tasks such as making coffee, and its capabilities are expected to improve further with OpenAI's multimodal models. The collaboration focuses on enhancing the robot's intelligence, especially its language processing and reasoning; in a demonstration video, Figure 01 autonomously performs real-world tasks such as lifting boxes, flexibly combining visual models with force feedback. Figure 01 has also made breakthroughs in bipedal walking technology. OpenAI has invested in Figure and applied its multimodal models to robotics, emphasizing the paradigm of using AI to oversee AI. 
Researchers believe that in the next decade<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1343,190,909],"collection":[],"class_list":["post-4781","post","type-post","status-publish","format-standard","hentry","category-news","tag-figure","tag-openai","tag-909"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/4781","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=4781"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/4781\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=4781"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=4781"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=4781"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=4781"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}