{"id":36238,"date":"2025-05-28T11:18:53","date_gmt":"2025-05-28T03:18:53","guid":{"rendered":"https:\/\/www.1ai.net\/?p=36238"},"modified":"2025-05-28T11:18:53","modified_gmt":"2025-05-28T03:18:53","slug":"%e5%ae%89%e5%85%a8%e5%85%ac%e5%8f%b8%e6%8a%ab%e9%9c%b2-ai%e4%bb%a3%e7%a0%81%e5%8a%a9%e6%89%8b-gitlab-duo-%e6%8f%90%e7%a4%ba%e8%af%8d%e6%b3%a8%e5%85%a5%e6%bc%8f%e6%b4%9e%ef%bc%8c%e5%8f%af%e5%88%a9","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/36238.html","title":{"rendered":"Security Firm Discloses AI Code Assistant GitLab Duo Prompt Word Injection Vulnerability, Can Be Used to \"Hide Drugs\" in Unicode \/ Base16, etc."},"content":{"rendered":"<p>In a May 27 post, security firm Legit Security revealed that GitLab's AI assistant\u00a0<a href=\"https:\/\/www.1ai.net\/en\/tag\/gitlab-duo\" title=\"_Other Organiser\" target=\"_blank\" >GitLab Duo<\/a> Presence of a cue word injection<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%bc%8f%e6%b4%9e\" title=\"_Other Organiser\" target=\"_blank\" >loophole<\/a>Hackers can make GitLab Duo output whatever the hacker wants to accomplish by means of a prompt injection.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-36239\" title=\"c8b816b5j00swybtq009gd000nw00egp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/05\/c8b816b5j00swybtq009gd000nw00egp.jpg\" alt=\"c8b816b5j00swybtq009gd000nw00egp\" width=\"860\" height=\"520\" \/><\/p>\n<p>GitLab Duo, an AI assistant built on Claude, a large language model developed by Anthropic, is described as launching in June 2023, with the addition of the Duo Chat chatbot feature in April 2024, allowing users to interact with it through natural language. The assistant provides developers with code suggestions, code reviews, analyzes merge requests, and more.<\/p>\n<p>1AI refers to Legit Security's report that the company has discovered a remote hint injection vulnerability in GitLab Duo, which allows hackers to bury malicious hints in the GitLab project in advance in order to avoid detection by security systems.<strong>The researchers used Unicode Smuggling, Base16 encoding, and KaTeX rendering.<\/strong>This is so that the content of these tips does not appear directly on the site and is not visible to the user, but is still read by Duo when generating suggestions.<\/p>\n<p>According to the security firm, the content of the corresponding prompts could tamper with code suggestions generated by Duo (e.g., instructing the AI to add malicious JavaScript packages to the output), embed malicious links in Duo responses, or make Duo incorrectly believe that a malicious merge request is safe for a variety of hacker-specific purposes.<\/p>\n<p>The corresponding company has now notified GitLab of the issue in February of this year.<strong>GitLab subsequently confirmed the vulnerability and completed a fix<\/strong>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In a May 27 post, security firm Legit Security disclosed a prompt injection vulnerability in GitLab's AI assistant GitLab Duo, which allows a hacker to make GitLab Duo output whatever the hacker wants to achieve by means of a prompt injection. GitLab Duo, an AI assistant built on Claude, a large language model developed by Anthropic, is described as launching in June 2023, with the addition of the Duo Chat chatbot feature in April 2024, which allows users to interact with it through natural language. The assistant provides the opportunity to develop<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[2719,6769,2994],"collection":[],"class_list":["post-36238","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-gitlab-duo","tag-2994"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/36238","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=36238"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/36238\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=36238"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=36238"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=36238"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=36238"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}