OpenAI has disclosed a security incident at analytics provider Mixpanel that exposed limited profile data of ChatGPT API users. The breach, confirmed in a transparency blog post, affected names, email addresses, and basic analytics but spared sensitive information like chat logs, passwords, or API keys.
Attacker Exports Mixpanel Dataset; OpenAI Terminates Vendor Relationship
An unauthorized actor accessed Mixpanel’s systems earlier this month and exported a dataset of customer profile information; Mixpanel notified OpenAI on November 25. Exposed details included API account names, email addresses, approximate locations, OS and browser information, referring websites, and user/organization IDs. The exposure was strictly limited to API product users; consumer ChatGPT accounts were unaffected.
OpenAI immediately severed ties with Mixpanel, removed the service from production, and launched expanded security audits with elevated requirements for vendors across its ecosystem. All impacted organizations, admins, and users are being notified directly.
No Chat Logs or Sensitive Data Compromised; Phishing Risks Heightened
Crucially, no core OpenAI systems were breached, and no payment details, government IDs, or conversation histories were involved. However, the leaked profile data raises phishing and social engineering risks, prompting OpenAI’s urgent user guidance.
OpenAI’s Security Recommendations Post-Breach
Users should:
- Scrutinize unexpected emails/links claiming OpenAI origin
- Verify sender domains match official OpenAI addresses
- Never share passwords, API keys, or codes via email/text/chat
- Enable multi-factor authentication everywhere
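The domain-verification advice above can be automated. The sketch below is a minimal, illustrative check in Python: it extracts the domain from an email’s From header and requires an exact match against a trusted list. The specific domains in `TRUSTED_DOMAINS` are assumptions for illustration, not an official OpenAI allowlist, and a real mail filter would also validate SPF/DKIM/DMARC rather than trust the header alone.

```python
import email.utils

# Illustrative allowlist only -- not an official list of OpenAI sending domains.
TRUSTED_DOMAINS = {"openai.com", "email.openai.com"}

def sender_domain(from_header: str) -> str:
    """Extract the lowercase domain from an email From: header."""
    _, addr = email.utils.parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def looks_legitimate(from_header: str) -> bool:
    # Exact match defeats lookalike tricks such as
    # "openai.com.attacker.net" or subdomain spoofing in the display name.
    return sender_domain(from_header) in TRUSTED_DOMAINS

print(looks_legitimate("OpenAI <noreply@openai.com>"))          # True
print(looks_legitimate("OpenAI <billing@openai.com.evil.io>"))  # False
```

Note the exact-match design: substring checks like `"openai.com" in domain` would pass the spoofed second address, which is precisely the pattern phishers exploit after a profile-data leak like this one.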
OpenAI reaffirmed its commitment to trust and transparency, and to holding vendors accountable, amid rising third-party risks in AI services.
