July 15 news: according to Ant Technology, the World Digital Technology Academy (WDTA) recently held an official release at the United Nations headquarters in Geneva for a new standard in its AI STR series, the "AI Agent Operational Security Testing Standard". The standard was led by Ant Group, Tsinghua University, and China Telecom, and jointly compiled by more than 20 domestic and international organizations, enterprises, and universities, including PricewaterhouseCoopers, Nanyang Technological University in Singapore, and Washington University in St. Louis. It is the world's first security testing standard for single-agent operation.

According to reports, the standard targets the "behavioral" risks that arise as AI agents cross the "language wall" from generating text to taking actions. For the first time, it maps five key links (input/output, the large model, RAG, memory, and tools) to the agent's operating environment, building a full-link risk analysis framework. It also subdivides agent risk types and improves on, or newly proposes, testing methods such as model detection, network communication analysis, and tool fuzz testing (a sketch of the latter follows below), filling the gap in security testing standards for AI agents.
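To make the "tool fuzz testing" idea concrete, here is a minimal sketch in Python of how one might fuzz an agent's tool interface with malformed and adversarial payloads. The article does not describe the standard's actual test procedures, so the run_agent_tool function, the payload generator, and the unsafe-execution check below are all hypothetical illustrations, not the standard's method.

```python
# Hypothetical sketch of tool fuzz testing for an AI agent.
# The tool interface and checks are illustrative assumptions,
# not the WDTA standard's actual methodology.
import random
import string


def run_agent_tool(payload: str) -> dict:
    """Stand-in for a real agent tool call (e.g., a shell or file tool)."""
    if "rm -rf" in payload:
        # Simulated vulnerability: risky input passed through unchecked.
        return {"status": "executed", "action": payload}
    return {"status": "ok", "action": None}


def random_payload(max_len: int = 64) -> str:
    """Mix random printable noise with known-risky token fragments."""
    noise = "".join(random.choices(string.printable, k=random.randint(1, max_len)))
    risky = random.choice(["", "rm -rf /", "../../etc/passwd", "<script>", "' OR 1=1 --"])
    return noise + risky


def fuzz(trials: int = 1000) -> list[str]:
    """Feed random payloads to the tool and collect unsafe responses."""
    failures = []
    for _ in range(trials):
        payload = random_payload()
        result = run_agent_tool(payload)
        # Flag any case where the tool executed a risky action verbatim.
        if result["status"] == "executed":
            failures.append(payload)
    return failures


if __name__ == "__main__":
    bad = fuzz()
    print(f"{len(bad)} unsafe executions out of 1000 fuzzed inputs")
```

In practice, a harness like this would be pointed at each tool an agent can invoke, with payload generators tailored to the tool's input schema.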
1AI learned from Ant Technology that the standard not only provides a feasible and reliable security benchmark for AI agents, but also contributes a useful exploration toward the security, trustworthiness, and sustainability of the global AI agent ecosystem. Some of the standard's testing and certification methods have already been applied in fields such as finance and healthcare.
Previously, WDTA released three AI STR standards: the "Generative AI Application Security Testing Standard", the "Large Language Model Security Testing Method", and the "Large Model Supply Chain Security Requirements", compiled with the participation of experts and scholars from dozens of organizations, including OpenAI, Ant Group, KDDI, Google, Microsoft, NVIDIA, Baidu, and Tencent.