{"id":12257,"date":"2024-06-05T09:38:46","date_gmt":"2024-06-05T01:38:46","guid":{"rendered":"https:\/\/www.1ai.net\/?p=12257"},"modified":"2024-06-05T09:38:46","modified_gmt":"2024-06-05T01:38:46","slug":"openai-%e5%92%8c%e8%b0%b7%e6%ad%8c-deepmind-%e5%91%98%e5%b7%a5%e8%81%94%e5%90%88%e5%8f%91%e5%a3%b0%ef%bc%9a%e9%ab%98%e7%ba%a7%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd%e9%a3%8e%e9%99%a9%e5%b7%a8%e5%a4%a7","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/12257.html","title":{"rendered":"OpenAI and Google DeepMind employees issue joint statement: advanced AI risks are enormous and regulation is urgently needed"},"content":{"rendered":"<p data-vmark=\"4140\">Several former and current employees of <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with OpenAI\" target=\"_blank\" >OpenAI<\/a> and Google <a href=\"https:\/\/www.1ai.net\/en\/tag\/deepmind\" title=\"View articles tagged with DeepMind\" target=\"_blank\" >DeepMind<\/a> recently issued a joint open letter expressing concern about the potential risks of advanced artificial intelligence and the current lack of oversight of AI companies.<\/p>\n<p data-vmark=\"fa34\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-12258\" title=\"d31f3538-b3c3-4f18-8811-317a475d709c\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/d31f3538-b3c3-4f18-8811-317a475d709c.jpg\" alt=\"d31f3538-b3c3-4f18-8811-317a475d709c\" width=\"640\" height=\"342\" \/><\/p>\n<p>Image source: Pixabay<\/p>\n<p data-vmark=\"87eb\">The open letter points out that the development of artificial intelligence could bring a range of risks.<span class=\"accentTextColor\">These include entrenching existing social inequalities, enabling manipulation and the spread of misinformation, and the possibility that out-of-control autonomous AI systems could lead to human extinction.<\/span><\/p>\n<p data-vmark=\"b2a3\">The letter states that AI 
companies have strong financial incentives to keep advancing AI research and development while withholding information about their safety measures and risk levels. The open letter argues that these companies cannot be expected to share this information voluntarily, and therefore calls on insiders to speak out.<\/p>\n<blockquote>\n<p data-vmark=\"3f5d\">In the absence of effective government regulation, these current and former employees are among the few groups that can hold these companies accountable to the public. However, strict confidentiality agreements limit what employees can say, leaving them able to report problems only to the very companies that may be failing to address them. Traditional whistleblower protections do not apply, because they focus on illegal activity, whereas many of the risks of concern are not yet regulated.<\/p>\n<\/blockquote>\n<p data-vmark=\"ddd4\"><span class=\"accentTextColor\">The employees call on AI companies to provide robust whistleblower protections for those who expose AI risks<\/span>, specifically including:<\/p>\n<ul class=\"list-paddingleft-2\">\n<li>\n<p data-vmark=\"daa8\">Not creating or enforcing agreements that prevent employees from raising criticism of risk-related issues;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"6c83\">Providing a verifiably anonymous process through which employees can raise risk-related concerns with the board, regulators, and independent organisations in the relevant field;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"010f\">Supporting a culture of open criticism that allows employees to raise concerns about technology-related risks with the public, the board, regulators, and others, while protecting trade secrets;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"6630\">Not retaliating against employees who publicly share confidential risk-related information after other procedures have failed.<\/p>\n<\/li>\n<\/ul>\n<p data-vmark=\"4125\">A total of 13 employees signed the open letter, including 7 
former OpenAI employees, 4 current OpenAI employees, 1 former Google DeepMind employee, and 1 current Google DeepMind employee. It is reported that OpenAI has threatened to cancel employees&#039; vested equity in retaliation for speaking out, and has required employees to sign strict confidentiality agreements restricting them from criticizing the company.<\/p>","protected":false},"excerpt":{"rendered":"<p>Several former and current employees of OpenAI and Google DeepMind recently issued a joint open letter expressing concerns about the potential risks of advanced AI and the current lack of regulation of AI tech companies. Image source Pixabay The open letter points out that the development of AI could pose a range of risks, such as increasing inequality in existing societies, contributing to manipulation and the spread of disinformation, and the possibility that out-of-control autonomous AI systems could lead to the extinction of the human race. The letter writes that AI companies have powerful financial interests driving them to continue advancing AI research and development, while at the same time being reticent to provide information on protective measures and risk levels. The open letter argues that these companies cannot be expected to voluntarily share this information, and therefore calls on insiders to come forward and speak out. 
As<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[593,190],"collection":[],"class_list":["post-12257","post","type-post","status-publish","format-standard","hentry","category-news","tag-deepmind","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/12257","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=12257"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/12257\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=12257"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=12257"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=12257"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=12257"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}