OpenAI’s Head of Preparedness will lead the company’s Preparedness framework, identifying threats posed by its AI models.
Photo Credit: Reuters
OpenAI CEO Sam Altman called the Head of Preparedness a critical role
OpenAI is looking for a Head of Preparedness who will help the artificial intelligence (AI) company anticipate threats that could emerge from its AI models and plan ways to mitigate them. The company is offering substantial compensation for the role in both cash and equity. OpenAI CEO Sam Altman called the position critical to the company's functioning, highlighting that the hire will focus on evaluating frontier models before they are released publicly. Notably, this comes at a time when the company faces multiple lawsuits alleging that ChatGPT played a role in encouraging users towards murder and suicide.
In a new listing on its careers page, the AI giant said it is looking for a Head of Preparedness. It is a senior role in the Safety Systems team, which “ensures that OpenAI's most capable models can be responsibly developed and deployed.” The role is based in San Francisco and comes with a listed annual salary of up to $555,000 (roughly Rs. 4.99 crore) plus equity.
The OpenAI CEO shared the listing on X (formerly known as Twitter), noting that the position is critical as model capabilities continue to grow quickly. In his post, Altman described it as a “stressful job” that will involve diving into complex safety challenges immediately upon joining the company.
The official job description says the Head of Preparedness will lead the technical strategy and execution of the company's Preparedness framework, which is part of the broader Safety Systems organisation. The role will be responsible for building and coordinating capability evaluations, establishing detailed threat models, and developing mitigations that form “a coherent, rigorous, and operationally scalable safety pipeline.”
Core responsibilities listed in the job description include designing precise and robust capability evaluations for rapid model development cycles, creating threat models across multiple risk domains, and overseeing mitigation strategies that align with identified threats. The role demands deep technical judgement and clear communication to guide complex work across the safety organisation.
OpenAI opened this new position at a time when its AI systems have come under scrutiny for allegedly causing unintended harm to users and the broader world. Recently, ChatGPT was implicated in a teenager's suicide and a 55-year-old man's murder-suicide. The company's AI browser, ChatGPT Atlas, is also said to be vulnerable to prompt injection attacks.