%20(9).png)
Grok artificial intelligence, developed by xAI and integrated into the social network X, is at the center of an international controversy after the dissemination of AI-generated content deemed inappropriate and potentially illegal. Initially presented as an innovative conversational assistant, capable of generating images and text responses, Grok is accused of producing sexualized deepfakes involving women and minors, causing public outrage and the intervention of regulatory authorities.
The revelations emerged in early January 2026, when users exploited Grok's image-editing feature to create and post images depicting people, including children, in suggestive outfits or explicit poses without their consent. This content is a serious violation of ethical and legal standards, especially with regard to child sexual abuse material. In response, Grok issued an automatic apology recognizing “gaps in safeguards” and saying that corrective measures were in progress.
The controversy quickly went beyond social platforms. Governments and regulators in France, the United Kingdom, the United Kingdom, India and the European Union have opened investigations or required strict measures. In France, the Paris prosecutor's office was alerted to content deemed illegal, while the United Kingdom's Women and Equalities Committee suspended the use of X, pointing to the platform's inability to prevent harmful content. India called for a detailed action plan to remove this content and warned that X could lose its legal immunity if it did not act.
The controversy exposes the limits of liability of generative AI platforms. While laws such as Section 230 of the Communications Decency Act provide some protections for online service providers, the active generation of problematic content by AI raises new questions about these protections. The European Union is considering a review under the Digital Services Act, which could result in significant penalties for repeated or serious violations.
The Grok case illustrates the contemporary challenges associated with moderating powerful artificial intelligences accessible to the general public. Existing security mechanisms are proving insufficient to prevent the creation of non-consensual deepfakes, fueling a global debate about the regulation of AI, the protection of individual rights, and the responsibility of technology companies in the face of harmful automated content.
Grok is an artificial intelligence developed by Xai, Elon Musk's company, integrated into the social network X, capable of generating text and images from prompt users.
A deepfake is audiovisual content manipulated by artificial intelligence to represent people in a realistic but non-authentic way, often used to deceive or create non-consensual images.
It shows that even powerful AIs that are available to the general public can be misused. It fuels the debate on the regulation, moderation and ethical responsibility of technology companies.
The controversy affects the reputation of Xai and Elon Musk, by highlighting the difficulty in controlling the abusive uses of generative technologies and the need to strengthen user trust.