{"id":2082,"date":"2023-12-19T09:49:47","date_gmt":"2023-12-19T01:49:47","guid":{"rendered":"https:\/\/www.1ai.net\/?p=2082"},"modified":"2023-12-19T09:49:47","modified_gmt":"2023-12-19T01:49:47","slug":"%e4%b8%ad%e5%9b%bd%e7%a7%91%e5%a4%a7%e7%ad%89%e5%8f%91%e5%b8%83sciguard%e5%a4%a7%e6%a8%a1%e5%9e%8b-%e5%bb%ba%e7%ab%8b%e9%a6%96%e4%b8%aa%e7%a7%91%e5%ad%a6%e9%a3%8e%e9%99%a9%e5%9f%ba%e5%87%86","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/2082.html","title":{"rendered":"USTC and others release SciGuard large model, establishing the first scientific risk benchmark"},"content":{"rendered":"<p>In a recent study, scientists from the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%b8%ad%e5%9b%bd%e7%a7%91%e5%a4%a7\" title=\"[See articles with the [University of Science and Technology of China] label]\" target=\"_blank\" >University of Science and Technology of China<\/a> and other institutions published an important result: <a href=\"https:\/\/www.1ai.net\/en\/tag\/sciguard\" title=\"[See articles with the [SciGuard] label]\" target=\"_blank\" >SciGuard<\/a>. The goal of this innovative approach is to protect AI for Science models from being misused in fields such as biology, chemistry, and medicine. 
To this end, the research team also established SciMT-Safety, the first benchmark test focused on safety in the chemical sciences.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2083\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/12\/6383850512951027121133953.png\" alt=\"\" width=\"756\" height=\"289\" \/><\/p>\n<p>Paper address: https:\/\/arxiv.org\/pdf\/2312.06632.pdf<\/p>\n<p>The research team revealed the potential risks of existing <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90ai%e6%a8%a1%e5%9e%8b\" title=\"[See articles with the [open-source AI model] label]\" target=\"_blank\" >open-source AI models<\/a>, which could be used to create hazardous substances and circumvent regulation. In response, they developed SciGuard, an intelligent agent designed to control the risk of AI misuse in science. They also proposed the first red-team benchmark focused on scientific safety, used to evaluate the safety of different AI systems.<\/p>\n<p>Experiments show that SciGuard exhibits minimal harmful effects in tests while maintaining good performance. The researchers found that open-source AI models can even find new ways to bypass regulation, for example by synthesizing harmful substances such as hydrogen cyanide and VX nerve agent. This raises concerns about the supervision of AI in science, especially for rapidly developing large scientific models.<\/p>\n<p>To address this challenge, the research team proposed SciGuard, a large-language-model-driven agent that aligns with human values and integrates resources such as scientific databases and regulatory databases. SciGuard responds to users&#039; queries with safety recommendations or warnings based on in-depth risk assessment, and can even refuse to respond. 
In addition, SciGuard uses multiple scientific models, such as chemical synthesis route planning models and compound property prediction models, to provide additional contextual information.<\/p>\n<p>To measure the safety level of large language models and scientific agents, the research team proposed SciMT-Safety, the first safety question-answering benchmark focused on the chemical and biological sciences. In the test, SciGuard performed best. The study calls on the global scientific and technological community, policymakers, ethicists, and the public to work together to strengthen the supervision of AI technology and continuously improve related techniques, to ensure that scientific and technological progress remains an upgrade for humanity rather than a challenge to social responsibility and ethics.<\/p>","protected":false},"excerpt":{"rendered":"<p>In a recent study by USTC and other institutions, scientists released an important result: SciGuard and SciMT-Safety. The goal of this innovative approach is to protect AI for Science models from inappropriate use in the fields of biology, chemistry, and pharmaceuticals. To this end, the research team also established SciMT-Safety, the first benchmark test focused on safety in the field of chemical sciences. Paper address: https:\/\/arxiv.org\/pdf\/2312.06632.pdf The research team revealed the potential risks of existing open-source AI models, which could be used to create hazardous substances and be able to circumvent regulations. 
To combat this, they developed SciGuard, which<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[711,712,713],"collection":[],"class_list":["post-2082","post","type-post","status-publish","format-standard","hentry","category-news","tag-sciguard","tag-712","tag-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/2082","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=2082"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/2082\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=2082"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=2082"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=2082"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=2082"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}