China's Commitment Framework for Artificial Intelligence Safety Released

July 30, 2025, World Artificial Intelligence Conference. The Plenary Session of the High-Level Conference on Global Governance of Artificial Intelligence (AI), themed "Artificial Intelligence Development and Safety," was held on the afternoon of July 26 in Shanghai. The meeting was organized by the China Research Network on Artificial Intelligence Development and Safety (hereinafter the "Research Network," CnAISDA). Wu Wei, member of the Standing Committee of the Shanghai Municipal Party Committee and Executive Vice Mayor of Shanghai, and Huo Fupeng, Director of the Center for Innovation-Driven Development of the National Development and Reform Commission (NDRC), attended the meeting and delivered speeches.


Four Turing Award winners, Geoffrey Hinton, Andrew Chi-Chih Yao (Yao Qizhi), Yoshua Bengio, and David Patterson, took part. In addition, more than 20 leading experts from China and abroad attended the conference, discussing cutting-edge topics such as the safe development of AI and narrowing the intelligence gap, and actively exploring paths for international cooperation on AI safety governance.

Yu Xiaohui, President of the China Academy of Information and Communications Technology (CAICT) and Secretary-General of the China Artificial Intelligence Industry Development Alliance (AIIA), was invited to participate in the dialogue and, together with representatives from Tsinghua University, the Shanghai Artificial Intelligence Laboratory, and the China Academy of Electronic Information Industry Development, among others, led the release of "China's Artificial Intelligence Safety Commitment Framework" (the "Framework").

The Framework builds on the AIIA's Commitment to AI Safety, released in December 2024, adding new content on strengthening international cooperation on AI safety governance and preventing the safety risks of frontier AI. It reflects the firm determination and openness of Chinese industry to work closely with global parties to promote the development of AI for good.

As a next step, CAICT, as a member of the Research Network and the secretariat of the AIIA, will join hands with the signatory enterprises to put the Framework into practice through disclosure of actions, testing and validation, and other means. It will steer artificial intelligence in China toward beneficial, safe, and fair development, actively engage in international governance cooperation, and contribute Chinese wisdom and strength to the global governance of AI safety.

Attached below is the full text of the Framework in Chinese and English:

  • China's Artificial Intelligence Safety Commitment Framework
  • China Artificial Intelligence Security and Safety Commitments Framework
  • The wave of artificial intelligence is sweeping across the globe, releasing the dividends of technological value and profoundly influencing global economic and social development and the progress of human civilization. At the same time, we clearly recognize that AI brings unpredictable risks and challenges. To seize this new round of development opportunities, members of the China Research Network on Artificial Intelligence Development and Safety solemnly launch the "China Artificial Intelligence Safety Commitment Framework," which aims to ensure high-quality development with high-level safety through industry self-regulation and to jointly promote the sound development of AI. This effort is led and promoted by the China Academy of Information and Communications Technology (CAICT). We are well aware that self-regulatory commitment is key to earning social trust; we will take this commitment as our code of conduct, accept supervision from all sectors of society, continuously improve and optimize, and ensure that the application of AI technology remains human-centered and intelligent for good.
  • The wave of artificial intelligence (AI) is sweeping across the globe, actively generating technological dividends and exerting profound influence on global economic and social development as well as the progress of human civilization. At the same time, we are keenly aware that AI brings about unpredictable risks and complex challenges. To seize this new round of development opportunities, members of the China AI Safety and Development Association (CnAISDA) solemnly launch the AI Security and Safety Commitments. Through industry self-regulation, we will leverage high-level security and safety measures to support high-quality development, and collaborate to promote the robust development of AI. This initiative is led and promoted by the China Academy of Information and Communications Technology (CAICT). We fully recognize that commitments to self-discipline constitute a critical foundation for gaining the trust of the international community. Guided by the Commitments as our code of conduct, and subject to the oversight of all stakeholders, we will continuously improve and refine our approach. By doing so, we will ensure that the application of AI technologies always remains people-centered and aligned with the principle of AI for good.
  • Commitment 1: Set up a security team or organizational structure and build a security risk management mechanism. Maintain an internal professional team responsible for AI risk assessment and security governance, with a clearly designated security officer in charge. Proactively set a security risk baseline that meets actual needs, take corresponding security measures when open-sourcing, carry out risk management throughout the entire life cycle of AI development and deployment, and clarify risk identification and response processes and measures.
  • Commitment I: Establish security and safety teams or organizational structures and build security and safety risk management mechanisms. Designate a leader responsible for AI security and safety, and establish specialized teams to conduct AI risk assessments and safety, security, and governance within the enterprise. Proactively define realistic security and safety risk baselines, adopt appropriate security and safety measures for open-source initiatives, and implement risk management practices throughout the entire AI development and deployment life cycle. Clearly outline processes and measures for risk identification and mitigation.
  • Commitment 2: Conduct model safety tests to improve model effectiveness, safety, and reliability. Through a specialized simulation test team, conduct red-team tests on AI models before releasing and updating them. For large models, focus safety and reliability testing on their general understanding, reasoning, and decision-making capabilities, as well as their performance in industrial, educational, medical, financial, legal, and other scenarios.
  • Commitment II: Conduct security and safety testing for AI models to enhance their performance, safety, and reliability. Through dedicated simulation and red-teaming experts, rigorously test AI models prior to their release or update. For large models in particular, prioritize safety and reliability evaluations focusing on their general understanding, reasoning, and decision-making capabilities, as well as their performance in critical domains such as industry, education, healthcare, finance, and law.
  • Commitment 3: Take measures to safeguard the security of training data and operational data. Formulate a data security protection system, establish protective technical measures, detect and promptly address data poisoning, and ensure the accuracy and reliability of training data. Encrypt business data and enforce access controls so that trade secrets, user privacy, and user-uploaded knowledge bases are accessed only with authorization and are not illegitimately exposed by AI models, safeguarding data security and privacy rights.
  • Commitment III: Implement measures to safeguard the security of training data and operational data. Establish data security protection policies and deploy corresponding technical measures to detect and promptly address data poisoning incidents, ensuring the accuracy and reliability of training data. Encrypt operational data and enforce access controls to protect business secrets, user privacy, and user-uploaded knowledge bases, ensuring access is restricted to authorized use only. Prevent unauthorized outputs by AI models, thereby safeguarding data security and privacy rights.
  • Commitment 4: Enhance infrastructure security. Establish software and hardware security monitoring and protection capabilities for AI system deployment, conduct regular and dynamic security penetration tests, simulate a variety of potential risk scenarios, identify and report security hazards in the environment, and assess the various risks that may result. Establish an infrastructure security emergency response mechanism, including emergency handling processes, allocation of responsibilities, and post-incident improvement programs.
  • Commitment IV: Enhance infrastructure security. Develop robust capabilities for monitoring and protecting the software and hardware used in AI system deployments. Conduct regular and dynamic security penetration tests to simulate potential risk scenarios, identify and report security vulnerabilities in the infrastructure, and assess associated risks. Establish an infrastructure security incident response mechanism, including emergency response procedures, clear accountability assignments, and post-incident improvement solutions.
  • Commitment 5: Enhance model transparency. Proactively disclose security governance practices to enhance transparency for stakeholders. Publicly disclose the model's functionality, areas of applicability, and limitations, and disclose potential risks to the public through model descriptions, service agreements, and similar channels.
  • Commitment V: Enhance model transparency. Proactively disclose safety and security governance measures and improve transparency for all stakeholders. Provide clear information about the model's capabilities, applicable fields, and limitations.
  • Commitment 6: Actively conduct frontier security research and prevent security risks in frontier fields. Research, develop, and deploy AI systems that are intelligent for good, and actively disclose research results to the public to help address the challenges facing society. Strengthen assessment of the risks of misuse of AI systems in frontier fields, and guard against their potential misuse in high-risk scenarios.
  • Commitment VI: Vigorously advance frontier safety and security research, and prevent safety and security risks in frontier fields. Innovate in the development and deployment of AI systems that embody the principle of AI for good, and disclose research findings to the public transparently, contributing to addressing pressing challenges faced by society. Strengthen the assessment of risks related to the abuse of AI systems in frontier fields, and prevent potential risks of their abuse in high-risk scenarios.
  • Commitment 7: Strengthen international cooperation on safety governance and promote the application of technology for good and for all. Actively participate in global exchanges and dialogues on AI safety governance, and share experiences and best practices in risk identification, assessment, prevention, and control. Actively assume social responsibility, strengthen science popularization and outreach, carry out skills training, improve AI literacy and skills, and help bridge the intelligence divide.
  • Commitment VII: Strengthen international cooperation on AI safety, security, and governance, and promote inclusive, beneficial applications of AI. Actively participate in global dialogues on AI safety, security, and governance, and contribute to the exchange of experiences and best practices in risk identification, assessment, and mitigation. Fulfill social responsibilities by advancing public science communication, enhancing AI education, and providing skills training to improve AI literacy and capabilities, with a focus on bridging the global intelligence divide.