A security lapse at Vyro AI, the company behind several well-known generative AI apps, has exposed sensitive data belonging to millions of users. The breach, identified by researchers at Cybernews, stemmed from a misconfigured server that was leaking 116GB of live user logs, putting personal and authentication details at risk.
The exposed data came from Vyro AI’s suite of apps, including the widely downloaded ImagineArt, Chatly, and Chatbotx. ImagineArt alone boasts over 10 million downloads, while Vyro AI claims a total of 150 million downloads across its portfolio. This scale of exposure raises serious concerns about the vulnerability of user information in the rapidly growing AI sector.
According to Cybernews, the leaked data included the AI prompts users entered, bearer authentication tokens, and user agents. The leakage of bearer tokens is particularly alarming, as they could allow attackers to take over accounts, access chat histories, or even make unauthorized AI-related transactions, such as fraudulently purchasing credits.
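To see why leaked bearer tokens are so dangerous, consider how they work: the token alone proves identity, so anyone who reads it from an exposed log can replay it. The sketch below is purely illustrative; the endpoint, host, and token are hypothetical placeholders, not Vyro AI's actual infrastructure.

```python
import requests

# Hypothetical illustration of bearer-token replay. The host, endpoint,
# and token below are placeholders, not Vyro AI's real API. A token
# scraped from exposed plaintext logs can simply be reused: the server
# cannot distinguish the attacker from the legitimate user.
LEAKED_TOKEN = "eyJhbGciOi..."  # token string found in the exposed logs

resp = requests.get(
    "https://api.example-ai-app.com/v1/chat/history",  # placeholder endpoint
    headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
    timeout=10,
)
print(resp.status_code, resp.text[:200])
```

Until such a token expires or is revoked, every request made with it looks legitimate, which is why researchers flag token leakage as an account-takeover risk rather than a mere privacy issue.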
The exposure of user prompts is especially sensitive because conversations with AI tools often contain private or personal information, and their leak could reveal far more than users intended to share. Given that these apps are used for creative, business, and even confidential tasks, the loss of this data poses serious privacy risks.
The server had been left unsecured and visible to anyone scanning for open databases since at least February 2025, meaning attackers could have exploited the exposure for months before it was discovered in April 2025.
This incident underscores a broader pattern of security lapses in the AI industry. As the sector expands, some companies have been criticized for sidelining robust security measures in their rush to market. The breach joins other recent incidents, such as the exposure of user conversations from ChatGPT and Grok, highlighting the ongoing risks of poorly protected AI infrastructure.
While there are currently no details about the timeline for resolving the issue, this breach serves as a stark reminder of the need for stronger security protocols in the AI space, especially as millions of users continue to rely on these tools.
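One concrete protocol that would have blunted this breach is scrubbing credentials before they ever reach log storage. The following is a minimal sketch, assuming a standard Python logging pipeline; the regex and setup are illustrative assumptions, not Vyro AI's actual configuration.

```python
import logging
import re

# Minimal sketch of log redaction. The pattern and logger names are
# illustrative assumptions, not Vyro AI's real logging setup. The filter
# scrubs bearer tokens from each record before it is written anywhere.
BEARER_RE = re.compile(r"Bearer\s+[A-Za-z0-9._~+/=-]+")

class RedactTokensFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = BEARER_RE.sub("Bearer [REDACTED]", str(record.msg))
        return True  # keep the record, just with the token scrubbed

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactTokensFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request headers: Authorization: Bearer abc123.def456")
# output: request headers: Authorization: Bearer [REDACTED]
```

Redaction at the logging layer means that even if a log server is later misconfigured and exposed, the tokens inside it are worthless to an attacker.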