{"id":29978,"date":"2025-03-04T10:42:56","date_gmt":"2025-03-04T02:42:56","guid":{"rendered":"https:\/\/www.1ai.net\/?p=29978"},"modified":"2025-03-04T10:42:56","modified_gmt":"2025-03-04T02:42:56","slug":"%e8%80%b6%e9%b2%81%e3%80%81%e5%89%91%e6%a1%a5%e7%ad%89%e9%ab%98%e6%a0%a1%e8%81%94%e5%90%88%e6%8e%a8%e5%87%ba-mindllm-%e5%8c%bb%e7%96%97%e8%a1%8c%e4%b8%9a-ai-%e5%b7%a5%e5%85%b7%ef%bc%8c%e5%8f%af","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/29978.html","title":{"rendered":"Yale, Cambridge, and other universities join forces to launch MindLLM healthcare industry AI tool that turns brain MRI data into intuitive textual information"},"content":{"rendered":"<p>Yale University, together with researchers from the University of Cambridge and Dartmouth College, has launched <a href=\"https:\/\/www.1ai.net\/en\/tag\/mindllm\" title=\"MindLLM\" target=\"_blank\" >MindLLM<\/a>, an AI tool for the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%8c%bb%e7%96%97%e8%a1%8c%e4%b8%9a\" title=\"Healthcare industry\" target=\"_blank\" >healthcare industry<\/a> that converts functional magnetic resonance imaging (fMRI) data of the brain into text, outperforming industry technologies such as UMBRAE, BrainChat, and UniBrain in multiple benchmark tests.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29979\" title=\"74eb471dj00sskvi800brd000v900klp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/74eb471dj00sskvi800brd000v900klp.jpg\" alt=\"74eb471dj00sskvi800brd000v900klp\" width=\"1125\" height=\"741\" \/><\/p>\n<p>MindLLM is described as consisting of an fMRI encoder and a large language model, which together analyze volumetric pixels (voxels) in fMRI scans to interpret brain activity. 
Its fMRI encoder employs a neuroscience-informed attention mechanism that adapts to input signals of different shapes, enabling multiple analysis tasks.<\/p>\n<p>The research team also introduced the Brain Instruction Tuning (BIT) method for the tool, which enhances the model's ability to extract a wide range of semantic information from fMRI signals, enabling it to perform a variety of decoding tasks such as image description and question-and-answer reasoning.<\/p>\n<p>Test results show that MindLLM achieves improvements of up to 12.0%, 16.4%, and 25.0% over existing industry models in benchmarks of text decoding, cross-individual generalization, and adaptation to new tasks, respectively, demonstrating that it outperforms traditional models in adapting to new subjects and handling unseen linguistic reasoning.<\/p>\n<p>However, <strong>the researchers noted that the model can currently only analyze static image signals<\/strong>. With further improvement, it is expected to be developed into a real-time fMRI decoder and applied widely in neural control, brain-computer interfaces, and cognitive neuroscience, with positive implications for neural prostheses that restore perceptual ability, mental-state monitoring, and brain-computer interaction.<\/p>\n<p>The corresponding paper is now available on arXiv (<a href=\"https:\/\/arxiv.org\/abs\/2502.15786\">click here to visit<\/a>).<\/p>","protected":false},"excerpt":{"rendered":"<p>Yale University, together with researchers from the University of Cambridge and Dartmouth College, has launched an AI tool for the healthcare industry called MindLLM, which is able to convert functional magnetic resonance imaging (fMRI) data of the brain into text, outperforming industry technologies such as UMBRAE, BrainChat, and UniBrain in several benchmark tests. 
MindLLM, which consists of an fMRI encoder and a large language model, is said to analyze volumetric pixels (voxels) in fMRI scans to interpret brain activity. Its fMRI encoder employs a neuroscience-informed attention mechanism that adapts to input signals of different shapes, enabling multiple analysis tasks. The team also introduced the Brain Instruction Tuning method for the tool<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5870,5871],"collection":[],"class_list":["post-29978","post","type-post","status-publish","format-standard","hentry","category-news","tag-mindllm","tag-5871"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29978","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=29978"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29978\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=29978"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=29978"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=29978"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=29978"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}