A recent incident involving Elon Musk’s xAI chatbot, Grok, has exposed over 370,000 user conversations through public search engines like Google. The conversations, many of which include personal or sensitive data, were indexed and made publicly accessible without users’ clear awareness.
The case comes barely a month after a similar issue affected OpenAI’s ChatGPT, where personal conversations began appearing in Google search results. As in that episode, the problem stems from the chatbot’s “share” feature, which generates a unique URL for each conversation. Intended to let users distribute AI-generated content via email or messaging, the feature also leaves these links accessible to web crawlers, enabling indexing by search engines such as Google, Bing, and DuckDuckGo. Together, the two cases suggest a recurring design oversight in how AI platforms handle public sharing tools.
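To make the oversight concrete: a minimal sketch of one possible mitigation is shown below, assuming a Python/Flask-style web stack. The route and the render_shared_chat helper are hypothetical illustrations, not xAI’s or OpenAI’s actual code, but the X-Robots-Tag response header itself is the standard mechanism that crawlers from Google, Bing, and DuckDuckGo honor. A shared page served this way remains reachable by anyone holding the link, yet is excluded from search indexes:

    # Hypothetical sketch of a share-link endpoint; not xAI's actual implementation.
    from flask import Flask, make_response

    app = Flask(__name__)

    def render_shared_chat(conversation_id):
        # Placeholder for page rendering; a real app would fetch and format the chat.
        return f"<html><body>Shared conversation {conversation_id}</body></html>"

    @app.route("/share/<conversation_id>")
    def shared_conversation(conversation_id):
        response = make_response(render_shared_chat(conversation_id))
        # Standard header that instructs search-engine crawlers not to index
        # or follow links on this page, while keeping it viewable via the URL.
        response.headers["X-Robots-Tag"] = "noindex, nofollow"
        return response

A robots.txt rule such as “Disallow: /share/” is sometimes suggested instead, but it only discourages crawling: a blocked URL can still surface in results when other pages link to it, and blocking crawling also prevents search engines from ever seeing a noindex directive on the page itself.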
According to an investigation by Forbes, some indexed chats contain names, passwords, documents, spreadsheets, and image files. While many of the interactions appear routine, such as composing tweets or drafting business content, others reportedly involve sensitive or controversial queries, including requests for drug-manufacturing instructions, malware code, and even bomb-construction guides.
These examples highlight a significant vulnerability in how shared AI conversations are handled online. Notably, several of the indexed interactions violated xAI’s own usage policies, which explicitly prohibit promoting violence, developing weapons of mass destruction, or any activity that critically harms human life.
Security researchers have warned that these exposed conversations could be exploited for malicious purposes: personally identifiable information leaked through the incident could expose users to harassment, doxxing, and fraud.
As generative AI tools become increasingly integrated into everyday tasks, the incident highlights the need for stricter controls and clearer user warnings regarding content sharing. Until such measures are implemented, the potential for unintended exposure of sensitive conversations remains a real and present risk.