{"id":19515,"date":"2024-09-07T10:12:12","date_gmt":"2024-09-07T02:12:12","guid":{"rendered":"https:\/\/www.1ai.net\/?p=19515"},"modified":"2024-09-07T10:12:12","modified_gmt":"2024-09-07T02:12:12","slug":"%e5%9b%bd%e5%86%85%e9%a6%96%e4%b8%aa-ai%e5%a4%a7%e6%a8%a1%e5%9e%8b%e6%94%bb%e9%98%b2%e8%b5%9b%e5%90%af%e5%8a%a8%ef%bc%8c%e8%ae%be%e7%ab%8b%e8%bf%91-100-%e4%b8%87%e5%85%83%e5%a5%96%e9%87%91%e6%b1%a0","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/19515.html","title":{"rendered":"China&#039;s first AI large model attack and defense competition launches, with a prize pool of nearly 1 million yuan"},"content":{"rendered":"<p>On the morning of September 6, at the &quot;Using AI to Guard AI: Attack and Defense in the Large Model Era&quot; forum of the 2024 Inclusion\u00b7Bund Conference, the &quot;<strong>Global AI Attack and Defense Challenge<\/strong>&quot;, China&#039;s first technology competition themed on large model attack and defense, was officially launched.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-19516\" title=\"c18396f9j00sjf7f50010d000im00b9m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/09\/c18396f9j00sjf7f50010d000im00b9m.jpg\" alt=\"c18396f9j00sjf7f50010d000im00b9m\" width=\"670\" height=\"405\" \/><\/p>\n<p>The competition focuses on AI large model industry practice and <strong>features two tracks, one for attack and one for defense<\/strong>, inviting &quot;white hat hackers&quot; (programmers who attack their own systems from an attacker&#039;s perspective to find security vulnerabilities) and other technical talent to conduct practical &quot;data poisoning&quot; attack and defense drills against text-to-image large models, as well as an anti-counterfeiting detection competition for content generated by large models in financial scenarios.<\/p>\n<ul>\n<li><strong>The &quot;attack track&quot; focuses on real-world application risks of text-to-image large models.<\/strong> Contestants can use dynamic attack and inducement techniques such as target hijacking, scenario introduction, and logic nesting to induce a large model to output risky images, thereby exposing the model&#039;s latent weaknesses and vulnerabilities and strengthening the security of its image generation.<\/li>\n<li><strong>The &quot;defense track&quot; focuses on detecting credential tampering in financial-scenario AI authentication<\/strong>, addressing the increasingly serious risks of deepfakes and AIGC-forged certificates. The competition provides a training set of millions of tampered credential samples; participants must develop and train models, evaluate them on the corresponding test sets, and output the probability that a given sample is forged.<\/li>\n<\/ul>\n<p>The event is co-organized by the Chinese Society of Image and Graphics, Ant Group, and the Cloud Security Alliance (CSA) Greater China region, and is supported by C9 universities such as Shanghai Jiao Tong University and Zhejiang University, as well as a number of industry-university-research organizations.<\/p>\n<p><strong>The event has set up a prize pool of nearly 1 million yuan<\/strong> and has invited many well-known experts from academia and industry to serve as judges. Wang Yaonan, academician of the Chinese Academy of Engineering and chairman of the Chinese Society of Image and Graphics, and Li Yuhang, chairman of the Cloud Security Alliance (CSA) Greater China region, serve as experts on the competition&#039;s steering committee. 
Nearly 30 well-known scholars from Tsinghua University, Shanghai Jiao Tong University, the Shanghai Artificial Intelligence Laboratory, the Chinese Society of Image and Graphics, and other institutions will take part in organizing and judging the competition.<\/p>\n<p><strong>Registration officially opened on September 6, and judging will be completed in early November.<\/strong> Contestants can now register through channels including the official website of the Chinese Society of Image and Graphics and the Alibaba Cloud Tianchi big data crowd-intelligence platform.<\/p>","protected":false},"excerpt":{"rendered":"<p>On the morning of September 6, at the \"Using AI to Guard AI: Attack and Defense in the Large Model Era\" forum of the 2024 Inclusion on the Bund Conference, the \"Global AI Attack and Defense Challenge\", China&#039;s first technology competition themed on large model attack and defense, was officially launched. The competition focuses on AI large model industry practice and features two tracks, one for attack and one for defense, inviting \"white hat hackers\" and other technical talent to conduct practical \"data poisoning\" attack and defense drills against text-to-image large models, as well as an anti-counterfeiting detection competition for content generated by large models in financial scenarios. The \"attack track\" focuses on real-world application risks of text-to-image large models; participants can use techniques such as target hijacking, scenario introduction, and logic nesting.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[433],"collection":[],"class_list":["post-19515","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/19515","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=19515"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/19515\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=19515"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=19515"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=19515"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=19515"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}