X includes user data in its AI training: European authorities investigate | Festina Lente | Turtles AI
Highlights:
- X includes user data in the training pool for Grok.
- Irish DPC expresses surprise and requests clarifications.
- Need for a valid legal basis for data processing in Europe.
- Parallels with Meta’s case and its regulatory implications.
X quietly pushed a change to include user data in its AI training pool for Grok, drawing attention from European privacy authorities.
X, formerly known as Twitter, recently changed its settings to automatically include user data in the training pool for Grok, the conversational AI developed by Elon Musk's xAI. The change, first noticed by platform users, has caught the attention of the Irish Data Protection Commission (DPC), which expressed surprise and requested clarifications from X.
Grok, a large language model (LLM), was developed to compete with OpenAI's ChatGPT, with a stated focus on being less politically correct and more humorous. The rollout of this setting, however, has raised concerns among users about their data being used to train Musk's AI. X has published instructions for users who wish to opt out of the feature.
The DPC stated it was surprised by X’s move, having already interacted with the platform on this issue in the preceding months. Graham Doyle, deputy commissioner of the DPC, confirmed to TechCrunch that the commission is awaiting a response from X and expects further developments early next week.
The DPC oversees X’s compliance with the European Union’s General Data Protection Regulation (GDPR), a law that allows for fines of up to 4% of global annual turnover for confirmed breaches. The notice accompanying the Grok data-sharing setting reads: "Allow your posts, as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning." Smaller print adds that such data may be shared with service provider xAI for these purposes.
The ambiguous wording does not clarify whether X is using all user data for Grok's training or only data from interactions with the chatbot, which is available to X Premium subscribers. Either way, in Europe the company needs a valid legal basis for processing people's data, as required by the bloc's privacy laws, and it is unclear whether X has one.
A similar plan by Meta to use Facebook and Instagram user data for AI training was paused in Europe last month after GDPR complaints led to regulatory scrutiny in Ireland and the UK.
The Data Protection Commission expects further developments on the Grok data-sharing issue next week. Meanwhile, we contacted X to inquire about the legal basis for processing Europeans’ data to train Grok, but as of writing, the company’s press email responded with the standard automated line: "Busy now, please check back later."
These developments underscore the importance of transparent and compliant data handling in AI development. X's case could become an important case study for the future of AI regulation in Europe.