Zoë Hitzig, who spent two years at OpenAI shaping AI models and safety policies, has resigned following the company's decision to test ads on ChatGPT. The researcher warns of the risk of user manipulation.
OpenAI has started testing advertising on ChatGPT this week, a decision that has triggered the resignation of Zoë Hitzig, a researcher who spent two years at the company working on AI model development, pricing strategies, and defining early safety policies before standards were established.
Hitzig does not consider advertising immoral, acknowledging that AI is expensive and ads can be a necessary revenue source. However, she questions the adopted strategy. For years, users have entrusted ChatGPT with unprecedented personal information, revealing medical fears, relationship problems, and beliefs about God and the afterlife, trusting the tool had no hidden agenda. Building an advertising model on this conversational basis creates, according to Hitzig, a manipulation potential that is currently neither understood nor preventable.
The former researcher draws a parallel with Facebook, which initially promised users would control their data. These commitments eroded under pressure from an advertising model that prioritized engagement. OpenAI has stated its ads will be clearly labeled, appear at the bottom of responses, and won't influence content. Hitzig believes the first version will likely follow these rules, but fears the company is building an economic engine that creates incentives to override its own principles.
The researcher points out that this erosion of principles may already be underway. Optimizing for engagement solely to generate advertising revenue would contradict OpenAI's stated principles, yet it has been reported that the company already optimizes for daily active users, making the model more flattering. This optimization can deepen dependence on AI, with documented consequences including episodes of chatbot-related psychosis and allegations that ChatGPT reinforced suicidal ideation in some users.
Advertising revenue can help ensure access doesn't remain limited to those who can pay. ChatGPT has 800 million weekly users, and premium subscriptions cost between $200 and $250 per month. Hitzig argues the real question isn't whether to run ads at all, but whether structures can be designed that neither exclude people nor manipulate them. And she believes it is possible.
Facing the apparent dilemma between restricting access and accepting advertising, Hitzig proposes alternatives. One option is cross-subsidies: companies using AI for high-value work would pay a surcharge to subsidize free access. Another is to accept advertising but under real governance, including independent oversight of how personal data is used. A third would place user data under independent control through trusts or cooperatives with a legal duty to act in users' interests.
Hitzig concludes there is time to implement these options and avoid a technology that manipulates those who use it for free or benefits only those who can afford to pay.