Researcher quits OpenAI questioning its advertising strategy

11/02/2026

Zoë Hitzig, who spent two years at OpenAI shaping AI models and safety policies, has resigned after the company announced it would begin testing ads on ChatGPT. The researcher warns of the risk of user manipulation.

OpenAI has started testing advertising on ChatGPT this week, a decision that has triggered the resignation of Zoë Hitzig, a researcher who spent two years at the company working on AI model development, pricing strategies, and defining early safety policies before standards were established.

Hitzig does not consider advertising immoral, acknowledging that AI is expensive and ads can be a necessary revenue source. However, she questions the adopted strategy. For years, users have entrusted ChatGPT with unprecedented personal information, revealing medical fears, relationship problems, and beliefs about God and the afterlife, trusting the tool had no hidden agenda. Building an advertising model on this conversational basis creates, according to Hitzig, a manipulation potential that is currently neither understood nor preventable.

The former researcher draws a parallel with Facebook, which initially promised users would control their data. These commitments eroded under pressure from an advertising model that prioritized engagement. OpenAI has stated its ads will be clearly labeled, appear at the bottom of responses, and won't influence content. Hitzig believes the first version will likely follow these rules, but fears the company is building an economic engine that creates incentives to override its own principles.

The researcher points out that this erosion of principles may already be underway. Although optimizing engagement solely to drive advertising revenue would run against OpenAI's stated principles, the company reportedly already optimizes for daily active users, making the model more flattering. This optimization can deepen dependence on AI, with documented consequences including episodes of chatbot-related psychosis and allegations that ChatGPT reinforced suicidal ideation in some users.

Advertising revenue can help ensure access doesn't remain limited to those who can pay. ChatGPT has 800 million weekly users, and premium subscriptions cost between $200 and $250 per month. Hitzig argues the real question isn't whether to run ads, but whether structures can be designed that neither exclude people nor manipulate them. She believes it is possible.

Facing the apparent dilemma between restricting access and accepting advertising, Hitzig proposes alternatives. One option is cross-subsidies: companies using AI for high-value work would pay a surcharge to subsidize free access. Another is accepting advertising but with real governance, including independent oversight of personal data use. A third would place user data under independent control through trusts or cooperatives with a legal duty to act in users' interests.

Hitzig concludes there is time to implement these options and avoid a technology that manipulates those who use it for free or benefits only those who can afford to pay.

Key points

  • Zoë Hitzig resigns from OpenAI after two years defining safety policies and developing models.
  • She resigns due to deep reservations about OpenAI's advertising strategy.
  • Users have entrusted ChatGPT with intimate information about health, relationships, and religious beliefs.
  • Building advertising on that data allows exploiting deep fears and desires to sell products in ways that cannot be prevented.
  • She draws a parallel with Facebook's erosion of privacy commitments.
  • OpenAI already optimizes to increase active users, which increases AI dependence.
  • Serious consequences have been documented including chatbot psychosis and reinforcement of suicidal ideation.
  • Hitzig says the problem isn't whether to run ads, but designing structures that neither exclude people nor manipulate them.
  • She proposes alternatives: cross-subsidies, independent governance, or user data control through trusts.
