{"id":27348,"date":"2025-01-19T13:03:45","date_gmt":"2025-01-19T05:03:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=27348"},"modified":"2025-01-19T13:05:55","modified_gmt":"2025-01-19T05:05:55","slug":"%e8%8b%b1%e4%bc%9f%e8%be%be%e6%8e%a8%e5%87%ba-nim-ai-%e6%8a%a4%e6%a0%8f%e6%9c%8d%e5%8a%a1%ef%bc%8c%e9%98%b2%e6%ad%a2%e6%a8%a1%e5%9e%8b%e9%81%ad%e7%94%a8%e6%88%b7%e8%b6%8a%e7%8b%b1","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/27348.html","title":{"rendered":"NVIDIA Launches NIM AI Fence Service to Prevent Models from Being \"Jailbroken\" by Users"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e4%bc%9f%e8%be%be\" title=\"Look at the article with the label\" target=\"_blank\" >NVIDIA<\/a> announced an AI guardrail service called \"NIM,\" now available as part of NVIDIA's NeMo Guardrails suite, which lets developers add a set of guardrail rules to large language models (LLMs). 
This guards against users \"jailbreaking\" the model with prompts that coax it into generating content outside expected boundaries.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-27349\" title=\"60ef7bbcj00sqbkph002zd000sh00m5p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/60ef7bbcj00sqbkph002zd000sh00m5p.jpg\" alt=\"60ef7bbcj00sqbkph002zd000sh00m5p\" width=\"1025\" height=\"797\" \/><\/p>\n<p>NVIDIA says the guardrail suite was <strong>trained on NVIDIA's Aegis content security dataset<\/strong>, which contains 35,000 labeled samples and is publicly available on Hugging Face (<a href=\"https:\/\/huggingface.co\/datasets\/nvidia\/Aegis-AI-Content-Safety-Dataset-2.0\">click here to visit<\/a>).<\/p>\n<p>NVIDIA notes that the suite is small and efficient, and runs smoothly in most environments. Enterprises can embed it directly when developing AI models, improving the security of AI deployed in healthcare, automotive, manufacturing, and other fields.<\/p>\n<p>In addition, NVIDIA announced a vulnerability scanning tool called Garak, which tests models for weaknesses such as outputting hallucinated content or leaking confidential organizational information.<\/p>","protected":false},"excerpt":{"rendered":"<p>NVIDIA announced an AI guardrail service called \u201cNIM\u201d, now available as the NeMo Guardrails suite, which lets developers add guardrail rules to large language models (LLMs) to keep users from \u201cjailbreaking\u201d the model with prompts and generating content that does not meet expectations. 
According to NVIDIA, the guardrail suite is trained on NVIDIA's Aegis content security dataset, which contains 35,000 labeled samples and has been made publicly available on Hugging Face. NVIDIA notes that the suite is small and efficient<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[239],"collection":[],"class_list":["post-27348","post","type-post","status-publish","format-standard","hentry","category-news","tag-239"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27348","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=27348"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/27348\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=27348"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=27348"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=27348"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=27348"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}