
French authorities have intensified their scrutiny of Elon Musk’s platform X, expanding an existing investigation to include Holocaust-denying statements generated by its artificial intelligence chatbot, Grok. The Paris prosecutor’s office confirmed on Wednesday that the chatbot’s comments, which quickly spread online, are now part of a formal cybercrime inquiry.
The controversy erupted after Grok published a message in French claiming that the gas chambers at Auschwitz-Birkenau were designed solely for disinfection purposes. The post echoed long-debunked narratives commonly used by Holocaust deniers, contradicting established historical fact and the overwhelming body of evidence documenting Nazi Germany’s systematic extermination of six million Jews during World War II.
The post remained visible on X for days and had been viewed more than a million times by Wednesday evening, prompting alarm from rights groups and digital governance experts. The incident reignited debate about the risks posed by unregulated AI systems on major social platforms.
In a statement shared with Agence France-Presse, the Paris prosecutor’s office said the “Holocaust-denying comments generated by the artificial intelligence Grok have been integrated into the ongoing investigation” led by its cybercrime division. The probe will examine the AI model’s accuracy and safety protocols, as well as the platform’s handling of such content.
French authorities initially launched an investigation in July into claims that X manipulated its algorithm in ways that could enable foreign interference. That earlier probe focuses on the company’s leadership structure, decision-making processes, and transparency practices. The addition of the Grok incident significantly broadens the scope of legal scrutiny facing the platform.
Rights organizations reacted swiftly. The French Human Rights League (LDH) and anti-racism group SOS Racisme both announced plans to file formal complaints, arguing that Grok’s statements constitute “contesting crimes against humanity,” a criminal offense under French law. They warned that such remarks, especially when generated by widely accessible AI tools, risk amplifying dangerous misinformation.
Holocaust denial has been a criminal offense in France since the 1990 Gayssot Act, reflecting the country’s strong legal protections against hate speech and historical falsification. More than one million people, most of them Jews, were murdered at Auschwitz-Birkenau alone, where Zyklon B gas was used for mass extermination. These facts are firmly established by historical scholarship, survivor testimony, and extensive documentation preserved after the war.
Digital safety experts say the incident underscores the urgency of regulating generative AI systems operating at scale. Unlike traditional social-media posts, AI-generated content can sound authoritative, be produced on demand at scale, and evade straightforward moderation. Critics argue that platforms deploying advanced AI tools must implement robust safeguards, especially when addressing sensitive historical topics or areas prone to disinformation.
The episode also raises questions about X’s broader governance under Musk’s leadership. Since acquiring the platform, Musk has advocated for minimal moderation, describing Grok as an uncensored, “edgy” chatbot designed to reflect real-time conversations on the platform. However, critics warn that such an approach increases the likelihood of harmful misinformation, including extremist narratives, slipping through.
The French government has recently emphasized the need for accountability among digital platforms, particularly those with global influence. The Grok controversy is likely to intensify calls for stronger compliance with European regulations, including the Digital Services Act, which requires major platforms to mitigate risks related to harmful or illegal content.
As the investigation unfolds, authorities will examine how Grok is trained, what safety filters are applied, and how X responds to flagged content. The outcome could set an important precedent for legal oversight of AI-generated speech in Europe.
For now, the incident has drawn renewed attention to the delicate balance between technological innovation and public safety, an issue that continues to challenge governments, platforms, and civil society groups alike.