Dave Willner, OpenAI’s trust and safety lead, has announced in a LinkedIn post that he is stepping down from the role. He will remain in an advisory capacity, citing a desire to spend more time with his family as the reason for the move. Willner explained that OpenAI’s high-intensity phase of development had become difficult to balance with his responsibilities as a parent of young children.
Despite leaving the position, Willner expressed pride in what OpenAI accomplished during his tenure, calling it one of the coolest and most interesting jobs in the world.
The announcement comes amid legal challenges facing OpenAI, particularly concerning its signature product, ChatGPT. The Federal Trade Commission (FTC) recently opened an investigation into whether OpenAI has violated consumer protection laws or engaged in practices that harm privacy and security. The investigation covers incidents such as a bug that leaked users’ private data, a matter that falls squarely within the domain of trust and safety.
Willner emphasized that he made his decision openly and publicly in the hope of normalizing discussions about work-life balance. His departure comes as concerns about AI safety mount, prompting OpenAI and other companies to adopt safeguards at the request of the White House. These safeguards include giving independent experts access to code, addressing societal risks such as bias, sharing safety information with the government, and watermarking AI-generated audio and visual content to promote transparency.