Elon Musk’s controversial Grok artificial intelligence model appears to be limited in some apps, but largely unchanged in others.
On Musk’s social media app, X, Grok’s AI image generation capabilities are now available only to paying customers, and the platform appears to be restricting the creation of sexual deepfakes following a wave of backlash from users and regulators. The standalone Grok app and website, however, still allow users to use AI to remove clothing from images of people who have not consented.
Early Friday morning, X’s Grok reply bot, which had previously been fielding a large number of requests that directed unsuspecting people into sexual contexts or revealing clothing, began replying to user requests with text such as “Image generation and editing is currently limited to paid subscribers. You can subscribe to unlock these features,” along with a link to a page to purchase an X premium account.
Based on the reply bot’s responses Friday morning, the flood of sexual images appears to have slowed dramatically: Grok has largely stopped producing sexually explicit images of identifiable people on X.
In the standalone Grok app, however, the AI model continued to comply with requests to depict non-consenting people in revealing clothing, such as swimsuits and underwear.
NBC News tested Grok in its standalone app and on its website with a series of photos of clothed people who had consented to the test. In the standalone app, Grok complied with a request to put a fully clothed person into a more revealing swimsuit and place them in a sexualized context.
At this time, it is not clear what the scope and parameters of the changes will be. Neither X nor Musk has issued a statement regarding the change, even as the backlash grew over the weekend before the changes were made.
The move comes after X has been flooded with sexual and non-consensual images generated by xAI’s Grok AI tool in recent days, as users encouraged the system to undress photos of people, mostly women, without their consent.
In most of the sexualized images created by Grok, the subject is depicted in more revealing clothing, such as a bikini or underwear. In some of the images viewed by NBC News, users successfully prompted Grok to depict people in transparent or translucent underwear, leaving them essentially naked. On Sunday, Ashley St. Clair, the mother of one of Musk’s children, began posting about the issue after a user directed Grok to sexualize images of her, including images of her as a minor.
The changes to X are a dramatic departure from the site’s trajectory just a day earlier, when the number of sexualized AI images posted to X by Grok was still increasing, according to an analysis by deepfake researcher Genevieve Oh. On Wednesday, Grok generated 7,751 sexual images in one hour, up 16.4% from the 6,659 images per hour recorded on Monday.
Oh is an independent analyst specializing in deepfakes and social media research. Since Dec. 31, she has been running a program that downloads all the images Grok posts in replies for one hour each day. Oh then analyzes the downloaded images with a program designed to detect various forms of nudity and undress. She provided NBC News with a video explaining her methodology and a spreadsheet documenting the Grok posts that were analyzed.
The images alarmed onlookers and those whose photos had been manipulated, and backlash against X continued to build in the lead-up to the changes.
Regulators and lawmakers began to put pressure on X.
On Thursday, British Prime Minister Keir Starmer sharply criticized X on Greatest Hits Radio, a network broadcast on 18 U.K. stations.
“This is shameful. It’s disgusting. And it should not be tolerated,” he said. “X has to figure this out.”
Starmer said media regulator Ofcom had his “full support to take action” and that “all options” were on the table.
Britain’s communications regulator, Ofcom, said Monday it had made “urgent contact” with X and xAI to assess their compliance with legal obligations to protect users, and that it would conduct a rapid assessment based on the companies’ responses. Irish regulators, Indian regulators and the European Commission are also seeking information about Grok-related safety issues.
However, U.S. agencies have been slow to take action that would impact Musk or X.
A Justice Department spokesperson told NBC News that the agency “takes AI-generated child sexual abuse material extremely seriously and will aggressively prosecute creators and possessors of CSAM.”
However, the spokesperson indicated that the department is more likely to prosecute individuals who request CSAM than those who develop and operate the bots that create it.
“We continue to explore ways to optimize enforcement in this area to protect children and hold accountable those who abuse technology against the most vulnerable,” the spokesperson said.
Some U.S. lawmakers had begun pushing for X to more aggressively police the images, citing the Take It Down Act, signed into law by President Trump in 2025 and championed by first lady Melania Trump. The law aims to criminalize the publication of non-consensual AI-generated pornographic images, with the threat of fines and prison time, and of Federal Trade Commission enforcement against platforms that don’t take action. It includes a provision allowing victims of non-consensual intimate images to ask social media sites to remove them, but sites are not required to put such systems in place until May 19, a year after the law was signed.
“This is exactly the abuse that the Take It Down Act was written to prevent. This law is clear: It is illegal to create these images, to share them, and to keep them up on our platforms,” Rep. Maria Salazar, R-Florida, said in a statement.
“While platforms still have several months to fully comply with the TAKE IT DOWN law, X should act immediately and remove all of this content,” she said.
“These illegal images pose a grave threat to the privacy and dignity of their victims. They must be removed and guardrails installed,” Sen. Ted Cruz (R-Texas) wrote on X.
“This incident is a good reminder that as AI advances, we will face privacy and security challenges, and we need to proactively address those threats,” he said.
Sen. Ron Wyden (D-Ore.), a co-author of Section 230 of the Communications Decency Act, said in a statement that the law, which largely exempts social media platforms from legal liability for content posted by their users provided they follow certain moderation practices, was not intended to protect companies from the output of their own chatbots.
“If President Trump’s Justice Department won’t respond, then the states need to step in and hold X and Musk accountable,” he said.
A number of state attorneys general’s offices, including those in Massachusetts, Missouri, Nebraska and New York, told NBC News they were aware of and monitoring Grok, but did not say they had opened criminal investigations. A spokesperson for Florida Attorney General James Uthmeier said his office is “currently consulting with X to ensure child protections are in place and prevent its platform from being used to generate CSAM.”
Some have also looked to private stakeholders, such as the app store operators that host X, to take action.
App stores such as the Google Play Store and Apple App Store, which host the X and xAI apps, appear to prohibit apps that produce non-consensual sexual imagery or sexual images of children in their terms of service. However, the apps remain live on those stores, and spokespeople for the stores did not respond to requests for comment.
