OpenAI Recruits a New "Head of Preparedness" at an Annual Salary of Up to $555,000

On December 30, according to Business Insider, OpenAI is recruiting a new "Head of Preparedness," offering an annual salary of up to $555,000 (approximately RMB 3.89 million) plus options.

The position sits within OpenAI's Safety Systems team and is responsible for building a coherent, rigorous, and scalable safety process covering model capability evaluation, threat modeling, and mitigation measures, in order to limit the potential negative effects of artificial intelligence.

OpenAI CEO Sam Altman recently wrote on X that this is a role that will "play a key role at a critical moment": model capabilities are improving rapidly, enabling many valuable tasks but also bringing real challenges.

In particular, he noted that the potential impact of large models on mental health got a "preview" this year, while in cybersecurity the models have become increasingly good at identifying critical vulnerabilities, which means preventing misuse of these capabilities will be one of the role's core tasks.

In his post, Altman said OpenAI already has a fairly sophisticated system for measuring capabilities, but it now needs to understand more carefully how those capabilities might be misused, and to design effective guardrails in its products and in the real world, so that society can enjoy the enormous benefits of AI while keeping the risks to a minimum.

He stressed that the job will be "very stressful" and that the new head will "almost immediately be thrown into the deep end." He also encouraged people interested in topics such as frontier cybersecurity defense, biosecurity, and the safety of self-improving systems to consider applying.

According to the job description, the new Head of Preparedness will be directly responsible for the evaluation system and protection strategy, coordinating the team to build tests and safety pipelines for different threat scenarios.

The role is regarded as the hub of OpenAI's "pre-deployment governance" of model safety: understanding and predicting the boundaries of model capabilities, and designing operational preventive mechanisms to keep models from being used for high-risk purposes such as cyber intrusion and biological threats.

OpenAI has long said its core mission is "to develop artificial intelligence in a way that benefits all of humanity," and it has treated safety protocols as a core part of its operations since the company's early days. However, as products have shipped and commercial pressure has risen, some former employees have publicly questioned how the company balances safety against profitability.

A former employee who led AGI-related safety research wrote in a blog post last year that he was "losing confidence in the company's ability to act responsibly as it approaches AGI," and pointed to a significant loss of safety-team personnel over the past year.

OpenAI has also faced concerns about ChatGPT's use in mental health contexts. As chatbots have spread among consumers, many users have come to treat them as a "substitute for someone to talk to," which in some cases has exacerbated mental health problems.

Last October, OpenAI said it was working with mental health professionals to improve how ChatGPT interacts with users at risk of self-harm, delusions, and similar conditions, to reduce the harm models can cause in sensitive situations.

In this context, OpenAI is, on the one hand, trying to use a high salary and more clearly defined authority and responsibility to attract senior talent with backgrounds in safety, cyber defense, or biological risk assessment.

On the other hand, the hire is also seen as a signal responding to outside criticism and reaffirming safety as a priority: even as commercialization advances at high intensity, the social risks AI may create are to be hedged through more systematic evaluation and protection mechanisms.
