{"id":14065,"date":"2024-06-26T09:00:25","date_gmt":"2024-06-26T01:00:25","guid":{"rendered":"https:\/\/www.1ai.net\/?p=14065"},"modified":"2024-06-26T09:00:25","modified_gmt":"2024-06-26T01:00:25","slug":"deepmind%e5%8f%91%e7%8e%b0%e6%94%bf%e6%b2%bb%e6%b7%b1%e5%ba%a6%e4%bc%aa%e9%80%a0%e6%98%afai%e6%81%b6%e6%84%8f%e4%bd%bf%e7%94%a8%e7%9a%84%e9%a6%96%e8%a6%81%e9%97%ae%e9%a2%98","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/14065.html","title":{"rendered":"DeepMind finds political deepfakes are top problem for malicious use of AI"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"View articles tagged [Google]\" target=\"_blank\" >Google<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/deepmind\" title=\"View articles tagged [DeepMind]\" target=\"_blank\" >DeepMind<\/a> has conducted a first-of-its-kind survey of the most common malicious uses of <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e5%ba%94%e7%94%a8\" title=\"View articles tagged [AI applications]\" target=\"_blank\" >AI applications<\/a>. The study, a collaboration between Google\u2019s AI division DeepMind and Jigsaw, a Google-owned research unit, aims to quantify the risks of the generative AI tools that the world\u2019s largest technology companies have brought to market in pursuit of huge profits.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14066\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/6385492020171742823527344.png\" alt=\"\" width=\"653\" height=\"566\" \/><\/p>\n<p>Technology-related motivations of bad actors<\/p>\n<p>The study found that creating realistic but fake images, videos, and audio of people was by far the most common abuse of generative AI tools, nearly twice as common as the next most frequent misuse: falsifying information with text-based tools such as chatbots. 
The most common goal of abusing generative AI was to influence public opinion, accounting for 27% of uses and raising concerns about how deepfakes could influence elections around the world this year.<\/p>\n<p>Deepfakes of British Prime Minister Rishi Sunak and other global leaders have appeared on TikTok, Facebook and Instagram in recent months. British voters will go to the polls in next week&#039;s general election. Despite efforts by social media platforms to label or remove such content, people may not recognize it as fake, and its spread could sway voters. DeepMind researchers analyzed about 200 observed instances of abuse, drawn from the social media platforms Facebook and Reddit as well as online blogs and media reports of misuse.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14067\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/6385492015500977662649171.png\" alt=\"\" width=\"727\" height=\"493\" \/><\/p>\n<p>The study found that the second most common motivation for abusing generative AI products, such as OpenAI\u2019s ChatGPT and Google\u2019s Gemini, is to make money, whether by offering services to create deepfakes or by using generative AI to produce large volumes of content, such as fake news articles. 
The study found that most abuses rely on easily accessible tools and \u201crequire minimal technical expertise,\u201d meaning more bad actors can misuse generative AI.<\/p>\n<p>DeepMind\u2019s research will shape how it improves the safety evaluations of its models, and the company hopes it will also influence how its competitors and other stakeholders view \u201cmanifestations of harm.\u201d<\/p>","protected":false},"excerpt":{"rendered":"<p>A first-of-its-kind study of the most common malicious uses of AI by Google DeepMind has revealed that artificial intelligence (AI) \"deepfakes\" that impersonate politicians and celebrities are more prevalent than AI-assisted cyberattacks. The study, a collaboration between Google's AI division DeepMind and Jigsaw, a Google-owned research and development unit, aimed to quantify the risks of the generative AI tools that the world's largest tech companies have brought to market in pursuit of huge profits. The study found that creating realistic but fake images, videos, and audio of people was the most common abuse of generative AI tools, nearly twice as common as the next most frequent misuse: falsifying information with text-based tools such as chatbots.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[901,593,281],"collection":[],"class_list":["post-14065","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-deepmind","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14065","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=14065"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14065\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=14065"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=14065"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=14065"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=14065"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}