Users’ API metadata exposed, company warns of potential phishing attacks
OpenAI confirmed Wednesday that a recent security incident at analytics provider Mixpanel exposed account information for some users of its API.
While the breach did not affect ChatGPT users accessing the platform directly, the incident has raised concerns about phishing and social engineering attacks targeting affected accounts.
What happened
According to Mixpanel, an unknown attacker gained access to part of its systems on November 8, 2025, exporting a dataset containing customer-identifiable metadata and analytics information. The stolen data included account names, email addresses, approximate browser-based location, operating system, and browser details.
OpenAI said that no prompts, API keys, authentication tokens, passwords, or payment information were compromised.
Only users who access OpenAI technology via the API, including through third-party apps powered by GPT models, were affected. OpenAI clarified that those using ChatGPT directly on the website were not impacted.
“OpenAI has terminated its use of Mixpanel as part of the response to this incident,” the company said in a statement, adding that it is working closely with partners to fully understand the scope of the breach and to notify affected users and organizations.
Immediate response and mitigation
Following the breach, Mixpanel secured impacted accounts, revoked active sessions, rotated compromised credentials, and blocked malicious IP addresses. Employee passwords were reset, and external cybersecurity firms were brought in to review authentication, session, and export logs.
Mixpanel began notifying customers directly about the incident.
“If you have not heard from us directly, you were not impacted,” Mixpanel CEO Jen Taylor said, emphasizing the company’s commitment to transparency and security.
OpenAI also implemented its own measures. The company removed Mixpanel from production services, reviewed affected datasets, and launched expanded security reviews across its vendor ecosystem.
Users were advised to enable multi-factor authentication and to stay alert to phishing attempts, particularly “smishing” attacks delivered through SMS messages, which accounted for 39% of mobile threats in 2024, according to Spacelift.
Risks and user guidance
Although the stolen data was limited to metadata, experts warn it could still be exploited in targeted phishing campaigns. Names, email addresses, and approximate locations could allow attackers to craft credible-looking messages.
OpenAI advised affected users to be cautious of emails, text messages, or other communications that request passwords, API keys, or verification codes, and to verify that any communication claiming to be from OpenAI comes from official domains.
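As a practical illustration of that advice, the domain check can be approximated in a few lines of code. The Python sketch below parses the From header of an email and compares the sender’s exact domain against an allowlist; the domains listed here are assumptions for illustration only, not an official list of OpenAI sending addresses.

```python
from email.utils import parseaddr

# Hypothetical allowlist for illustration only; consult OpenAI's own
# communications for the authoritative list of sending domains.
TRUSTED_DOMAINS = {"openai.com", "email.openai.com"}

def sender_is_trusted(from_header: str) -> bool:
    """Return True only if the From header's domain is on the allowlist.

    Look-alike domains (e.g. "openai.com.example.net") are easy to
    register, so the check compares the exact domain of the parsed
    address rather than doing a substring match.
    """
    _, address = parseaddr(from_header)          # strip the display name
    domain = address.rpartition("@")[2].lower()  # text after the last "@"
    return domain in TRUSTED_DOMAINS

# A look-alike domain fails the check even with a convincing display name.
print(sender_is_trusted("OpenAI Support <no-reply@email.openai.com>"))    # True
print(sender_is_trusted("OpenAI Support <support@openai-security.net>"))  # False
```

A From header can itself be forged, so a check like this is a first-pass filter at best; the more durable advice remains never to share passwords, API keys, or verification codes in response to unsolicited messages.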
“This incident demonstrates that modern AI ecosystems are not self-contained fortresses, but rely on a complex network of often unregulated third-party vendors,” said David Schwed, COO of AI security firm SovereignAI.
“A security gap at a peripheral link like Mixpanel can compromise the entire stack, affecting users who trust the ecosystem.”
Transparency and accountability
OpenAI emphasized that the breach was not a compromise of its own systems: users’ chats, API requests, and personal credentials remained secure. However, the incident highlights the risks inherent in relying on third-party services to handle analytics and user data.
“We are committed to transparency, and are notifying all impacted customers and users,” OpenAI said.
“We also hold our partners and vendors accountable for the highest bar for security and privacy of their services.”
Some API users expressed frustration on social media that a third-party service had access to their personal information.
“OpenAI sending names and emails to a third party analytics platform (Mixpanel) feels wildly irresponsible,” one user wrote, voicing a broader concern about data governance in AI ecosystems.
Even though no sensitive credentials were exposed, OpenAI’s decisive response, which included severing ties with Mixpanel and reviewing vendor security practices, underscores the challenges of securing complex AI platforms.
“The security and privacy of our products are paramount,” the company said, reaffirming its commitment to protecting user information and maintaining trust in its AI ecosystem.
OpenAI continues to monitor the situation and urges API users to remain alert.

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource, and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice, HODL FM strongly recommends contacting a qualified industry professional.