The European Commission has launched an investigation into Elon Musk’s social media platform X over concerns that its AI chatbot, Grok, generated and disseminated non-consensual sexual deepfakes, including images of women and minors. The probe will assess whether X complied with the EU’s Digital Services Act (DSA), which requires very large online platforms to assess and mitigate the risks posed by illegal and harmful content.
Immediate Action & Core Facts
The investigation follows global outrage after Grok was used to manipulate images of women and children, including to generate sexually explicit deepfakes. The EU’s tech chief, Henna Virkkunen, condemned the content as "violent and unacceptable." X has since restricted image-editing features and blocked certain functionality in jurisdictions where it is illegal, but regulators argue these steps do not fully address the systemic risks.
Deeper Dive & Context
Regulatory Concerns and Legal Risks
The EU’s Digital Services Act provides for fines of up to 6% of a company’s global annual turnover for violations. The Commission is examining whether X adequately assessed the risks before deploying Grok in Europe. A senior EU official said the changes made by X do not resolve all of the issues, prompting the formal investigation.
Global Backlash and Policy Responses
Several countries, including Indonesia, the Philippines, and Malaysia, temporarily blocked Grok after the deepfakes emerged. The UK’s Ofcom also launched an inquiry. EU Commission President Ursula von der Leyen emphasized that the bloc will not tolerate "digital undressing" of women and children, calling the harm caused by such content "very real."
X’s Response and Industry Implications
X directed inquiries to a January 14 statement in which it said it had restricted image editing and blocked users from generating illegal content. Critics, however, argue the measures are insufficient. EU lawmaker Regina Doherty pointed to broader weaknesses in AI regulation, stating that "no company operating in the EU is above the law."
Potential Consequences and Future Steps
The investigation could escalate tensions between the EU and the U.S., where some officials have criticized EU tech regulations. The outcome may set precedents for AI governance, particularly regarding content moderation and risk mitigation. The EU’s actions signal a firm stance on protecting fundamental rights in the digital age.