{"id":38034,"date":"2025-06-21T13:20:45","date_gmt":"2025-06-21T05:20:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=38034"},"modified":"2025-06-21T13:20:45","modified_gmt":"2025-06-21T05:20:45","slug":"openai-%e7%a0%94%e7%a9%b6%e5%91%98%ef%bc%9a%e6%a8%a1%e5%9e%8b%e6%80%9d%e8%80%83%e5%a6%82%e5%90%8c%e4%ba%ba%e7%b1%bb%e5%a4%a7%e8%84%91%e7%9a%ae%e5%b1%82%e8%bf%9b%e5%8c%96","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/38034.html","title":{"rendered":"OpenAI Researcher: Model Thinking Evolves Like the Human Cerebral Cortex"},"content":{"rendered":"<p>A few days ago, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with OpenAI\" target=\"_blank\" >OpenAI<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%a0%94%e7%a9%b6%e5%91%98\" title=\"View articles tagged with researcher\" target=\"_blank\" >researcher<\/a> Noam Brown sat down for an interview with Latent Space, in which he delved into AI reasoning paradigms and the future of multi-agent systems.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-38035\" title=\"eba11ccdj00sy6xh5001yd000u000gpm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/06\/eba11ccdj00sy6xh5001yd000u000gpm.jpg\" alt=\"eba11ccdj00sy6xh5001yd000u000gpm\" width=\"1080\" height=\"601\" \/><\/p>\n<p>Speaking about the future of AI-human collaboration, Noam emphasized the importance of reasoning paradigms and model scale: systematic deep thinking, he explained, only works on top of large models with underlying cognitive capabilities. <strong>This may be similar to brain evolution: the cerebral cortex had to develop first. To some extent, animals also need basic intelligence before they can think deeply.<\/strong><\/p>\n<p>On AI safety, Noam shared the idea of constraining AI through controllable action conditions, arguing that this may be an important path to safe alignment. 
He also pointed out that today's multi-model routing and scaffolding systems, while useful in the short term, may be rendered obsolete by scaling effects and native reasoning architectures; in contrast, the data-optimization value of Reinforcement Fine-Tuning (RFT) is more enduring.<\/p>\n<p>Looking ahead, Noam expects more researchers to apply the reasoning paradigm to long-horizon verification scenarios such as chemistry and drug discovery, in order to break through time and cost bottlenecks.<\/p>\n<p>In addition, Noam predicts that AI will not only revolutionize programming efficiency but will also become a powerful virtual assistant that knows exactly what the user prefers.<\/p>\n<p>He also mentioned that AI's potential in multi-agent tasks will explode rapidly as computing power and model reasoning efficiency improve in tandem. Only by seizing this wave of technology and becoming familiar with reasoning models and multi-agent architectures in advance can we usher in a deeper \"<a href=\"https:\/\/www.1ai.net\/en\/tag\/agi\" title=\"View articles tagged with AGI\" target=\"_blank\" >AGI<\/a> moment\" by 2030.<\/p>","protected":false},"excerpt":{"rendered":"<p>Recently, OpenAI researcher Noam Brown was interviewed by Latent Space, in which he explored in depth AI reasoning paradigms and the multi-agent future. Turning to the future of AI working with humans, Noam stressed the importance of the reasoning paradigm and model scale: only a large model with basic cognitive capabilities can support systematic deep thinking. Noam explained that this may be similar to the evolution of the brain: the cerebral cortex needs to develop first. To some extent, animals also need basic intelligence to think deeply. On AI safety, Noam shared the idea of restraining AI through controllable operational conditions, which may be an important way to achieve safe alignment. 
He also pointed out that the current multi-model road<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[151,190,2831],"collection":[],"class_list":["post-38034","post","type-post","status-publish","format-standard","hentry","category-news","tag-agi","tag-openai","tag-2831"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/38034","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=38034"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/38034\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=38034"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=38034"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=38034"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=38034"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}