Baltimore Mayor Brandon Scott said in an emailed statement to CNBC that the deepfakes on Grok “have traumatic, lifelong consequences for victims.”
“We’re talking about tech companies enabling the sexual exploitation of children,” Scott wrote. “Our city will not stand by and allow this to continue; it’s a threat to privacy, dignity, and public safety, and those responsible must be held accountable.”
Now part of SpaceX after a merger last month, xAI faces regulatory probes in several countries after Grok allowed the mass creation of so-called deepfake porn generated from images of women and children without their consent. Last week, attorneys representing three teenagers in Tennessee filed a proposed class-action lawsuit against xAI after Grok generated content depicting them in sexualized and debasing scenarios.
In the latest suit, filed in a circuit court on March 24, the mayor and City Council of Baltimore accused xAI of violating the city’s consumer protection laws and engaging in deceptive and unfair trade practices, namely by marketing Grok and X, formerly known as Twitter, as generally safe for users.
The complaint refers to a “put her in a bikini” trend that encouraged Grok users to take photos of others and have the chatbot depict them undressed or in swimwear. Musk, who controls SpaceX and is also CEO of Tesla, participated in the trend, sharing an image created with Grok depicting him in a string bikini.
The city is seeking “the maximum amount of statutory penalties available,” but did not list a specific amount in its complaint. It’s also asking for “injunctive relief” to force Musk’s company to make changes to X and Grok to curb the creation of what researchers refer to as nonconsensual intimate imagery, or NCII, and child sexual abuse material, or CSAM.
Baltimore wants the court to order X and xAI to “cease the targeting and exploitation of Baltimore’s residents, … reform their exploitative platform design” and revise their marketing.
Executives at SpaceX and xAI didn’t immediately respond to a request for comment.
In a report published on Tuesday, the Internet Watch Foundation, a U.K.-based charity, said that girls remain overwhelmingly targeted by CSAM, and were the targets of 97% of illegal AI-generated sexualized images assessed by the organization in 2025.