If you’ve ever had a conversation with AI assistant Grok, there’s a chance it’s available on the internet, for all to see.
According to a report by Forbes, Elon Musk’s AI assistant published more than 370,000 chats on the Grok website. Those URLs, which weren’t necessarily intended by users for public consumption, were then indexed by search engines and entered the public sphere.
While many of the conversations with Grok were reportedly benign, some were explicit and appeared to violate xAI’s own terms of service, including instructions for manufacturing illicit drugs like fentanyl and methamphetamine, constructing a bomb, and methods of suicide.
It wasn’t just chats. Forbes reported that uploaded files, including photos, spreadsheets and other documents, were also published.
Representatives for xAI, which makes Grok, didn’t respond to a request for comment.
The publishing of Grok conversations is the latest in a series of troubling reports that should spur chatbot users to be extra cautious about what they share with AI assistants. Don’t just gloss over the Terms and Conditions, and be mindful of the privacy settings.
Earlier this month, 404 Media reported on a researcher who discovered more than 130,000 chats with AI assistants Claude, ChatGPT and others were readable on Archive.org.
When a Grok chat is finished, the user can hit a share button to create a unique URL, allowing the conversation to be shared with others. According to Forbes, “hitting the share button means that a conversation will be published on Grok’s website, without warning or a disclaimer to the user.” These URLs were also made available to search engines, allowing anyone to read them.
There is no disclaimer that these chat URLs will be published to the open internet. But the Terms of Service on the Grok website read: “You grant, an irrevocable, perpetual, transferable, sublicensable, royalty-free, and worldwide right to xAI to use, copy, store, modify, distribute, reproduce, publish, display in public forums, list information regarding, make derivative works of, and aggregate your User Content and derivative works thereof for any purpose…”
But there is a measure of good news for users who accidentally hit the share button or were unaware their queries would be shared far and wide. Grok has a tool to help users manage their chat histories. Going to https://grok.com/share-links will present a history of your shared chats. Simply click the Remove button to the right of each chat to delete it from your chat history. It wasn’t immediately clear if that would have any effect on what’s already indexed in search engines.
Protect your privacy
E.M. Lewis-Jong, director at the Mozilla Foundation, advises chatbot users to keep a simple directive in mind: Don’t share anything you want to keep private, such as personal ID data or other sensitive information.
“The concerning issue is that these AI systems are not designed to transparently inform users how much data is being collected or under which conditions their data might be exposed,” Lewis-Jong says. “This risk is higher when you consider that children as young as 13 years old can use chatbots like ChatGPT.”
Lewis-Jong adds that AI assistants such as Grok and ChatGPT should be clearer about the risks users are taking when they use these tools.
“AI companies should make sure users understand that their data could end up on public platforms,” Lewis-Jong says. “AI companies are telling people that the AI might make mistakes — this is just another health warning that should also be implemented when it comes to warning users about the use of their data.”
According to data from SEO and thought leadership marketing company First Page Sage, Grok has a 0.6% market share, far behind leaders ChatGPT (60.4%), Microsoft Copilot (14.1%) and Google Gemini (13.5%).