EU launches investigation into Musk’s AI chatbot Grok over sexual deepfake content.
The European Union (EU) has launched a formal inquiry into the social media platform X, formerly known as Twitter, over its artificial intelligence chatbot, Grok. The move follows Grok’s generation of nonconsensual sexualized deepfake images, which has alarmed regulators and the public alike.
The investigation will assess whether X has met its obligations under the EU’s Digital Services Act (DSA), which sets out extensive rules to protect users from illegal content and other online harms. The EU’s concerns centre on the platform’s ability to manage manipulated sexually explicit imagery, including material that could constitute child sexual abuse. The European Commission said these risks have become tangible, endangering the safety and rights of citizens, particularly women and children.
The scrutiny from Brussels follows a growing global backlash against Grok, which has been criticized for letting users create and share images that digitally strip clothing from real people, in some cases depicting minors. Those capabilities have prompted outright bans or warnings from governments concerned about the technology’s implications.
As part of the extended investigation, regulators will evaluate X’s current systems, particularly as the platform moves to use Grok’s AI technology to curate content for users. That shift has intensified concerns about the safety and appropriateness of the content reaching users.
X has faced EU sanctions before. In December, the platform was fined 120 million euros for failing to comply with DSA requirements, including deceptive design practices tied to its blue check verification system, which regulators said facilitated scams.
Grok’s deepfake generation has also drawn responses beyond Europe. Malaysia and Indonesia moved swiftly to block access to Grok over its contentious image-generation capabilities. Although Malaysia later lifted its ban after discussions with X, the episode underscores the urgent need for stringent oversight of technologies that can so easily be used to exploit vulnerable people.
As the inquiry proceeds, there is no set timeline for concluding it or for determining what penalties might be imposed. The outcome will likely hinge on whether X can demonstrate a proactive commitment to user safety and compliance with the applicable regulatory standards.
