{"id":25256,"date":"2024-12-18T11:04:42","date_gmt":"2024-12-18T03:04:42","guid":{"rendered":"https:\/\/www.1ai.net\/?p=25256"},"modified":"2024-12-18T11:04:42","modified_gmt":"2024-12-18T03:04:42","slug":"%e8%b0%b7%e6%ad%8c%ef%bc%9a%e5%8f%aa%e8%a6%81%e6%9c%89%e4%ba%ba%e5%b7%a5%e7%9b%91%e7%9d%a3%ef%bc%8c%e5%ae%a2%e6%88%b7%e5%8d%b3%e5%8f%af%e5%9c%a8%e9%ab%98%e9%a3%8e%e9%99%a9%e9%a2%86","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/25256.html","title":{"rendered":"Google: Customers can use its AI to make decisions in 'high-risk' areas as long as there's human oversight"},"content":{"rendered":"<p>December 18th news: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a> has made clear, via an update to its use policy, that customers may use its generative <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd\" title=\"[View articles tagged with [artificial intelligence]]\" target=\"_blank\" >AI<\/a> tools to make \"automated decisions\" in \"high-risk\" areas (<strong>e.g. healthcare<\/strong>), as long as there is human oversight.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-25257\" title=\"63108c75j00soo5uj002dd000on00q8p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/63108c75j00soo5uj002dd000on00q8p.jpg\" alt=\"63108c75j00soo5uj002dd000on00q8p\" width=\"887\" height=\"944\" \/><\/p>\n<p>According to an updated version of the company's Generative AI Prohibited Use Policy released Tuesday, customers can, under certain conditions, use Google's generative AI for <strong>\"automated decision-making\" that may have a significant adverse impact on the rights of individuals<\/strong>, for example in <strong>employment, housing, insurance and social benefits<\/strong> and other areas. 
As long as these decisions are made under <strong>some form of human oversight<\/strong>, they are allowed.<\/p>\n<p>Note: In the field of artificial intelligence, automated decision-making refers to decisions made by an AI system based on factual or inferred data. For example, AI might <strong>decide whether to approve a loan based on an applicant's data<\/strong>, or screen job applicants.<\/p>\n<p>Google's previous draft terms stated that <strong>high-risk automated decisions involving generative AI should be banned outright<\/strong>. But Google told the media outlet TechCrunch that its generative AI has \"<strong>never actually been prohibited<\/strong>\" from automated decision-making in high-risk areas, <strong>provided there is human oversight<\/strong>.<\/p>\n<p>A Google spokesperson said in an interview, \"The human-oversight requirement has always existed and applies to <strong>all high-risk areas<\/strong>.\" He added: \"We've simply re-categorized the terms and listed some specific examples more clearly, with the aim of making the policy clearer for users.\"<\/p>\n<p>Regarding automated decisions that affect individuals, regulators have expressed concern about the potential bias of AI. For example, studies have shown that AI systems used to approve credit and mortgage applications may exacerbate historical patterns of discrimination.<\/p>","protected":false},"excerpt":{"rendered":"<p>On December 18, in the form of an updated use policy, Google made it clear that, with human oversight, clients could engage in \u201cautomated decision-making\u201d using its generative artificial intelligence tools in \u201chigh-risk\u201d areas, such as health care. According to the updated Generative AI Prohibited Use Policy issued by the company on Tuesday, clients can use Google's generative AI under certain conditions for \u201cautomated decision-making\u201d that may have a significant adverse impact on individual rights, such as in the areas of employment, housing, insurance and social welfare. These decisions are allowed as long as they are made under some form of human oversight. Note: In the area of artificial intelligence, automated decision-making refers to decisions made by an AI system based on factual or inferred data. 
For example, AI might make loan-approval decisions based on applicant data<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[204,281],"collection":[],"class_list":["post-25256","post","type-post","status-publish","format-standard","hentry","category-news","tag-204","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25256","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=25256"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25256\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=25256"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=25256"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=25256"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=25256"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}