{"id":428,"date":"2026-01-15T06:25:43","date_gmt":"2026-01-15T06:25:43","guid":{"rendered":"https:\/\/diyhaven858.wasmer.app\/index.php\/musk-denies-awareness-of-grok-sexual-underage-images-as-california-ag-launches-probe\/"},"modified":"2026-01-15T06:25:43","modified_gmt":"2026-01-15T06:25:43","slug":"musk-denies-awareness-of-grok-sexual-underage-images-as-california-ag-launches-probe","status":"publish","type":"post","link":"https:\/\/diyhaven858.wasmer.app\/index.php\/musk-denies-awareness-of-grok-sexual-underage-images-as-california-ag-launches-probe\/","title":{"rendered":"Musk denies awareness of Grok sexual underage images as California AG launches probe"},"content":{"rendered":"<div xmlns:default=\"http:\/\/www.w3.org\/2000\/svg\">\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">Elon Musk said Wednesday he is \u201cnot aware of any naked underage images generated by Grok,\u201d hours before the California attorney general opened an investigation into xAI\u2019s chatbot over the \u201cproliferation of nonconsensual sexually explicit material.\u201d\u00a0<\/p>\n<p class=\"wp-block-paragraph\">Musk\u2019s denial comes as pressure mounts from governments worldwide \u2014 from the U.K. and Europe to Malaysia and Indonesia \u2014 after users on X began asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimated roughly one image was posted each minute on X. A separate sample, gathered from January 5 to January 6, found an average of 6,700 images per hour over the 24-hour period. (X and xAI are part of the same company.)\u00a0<\/p>\n<p class=\"wp-block-paragraph\">\u201cThis material\u2026has been used to harass people across the internet,\u201d said California Attorney General Rob Bonta in a statement. 
\u201cI urge xAI to take immediate action to ensure this goes no further.\u201d<\/p>\n<p class=\"wp-block-paragraph\">The AG\u2019s office will investigate whether and how xAI violated the law.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">Several laws exist to protect targets of nonconsensual sexual imagery and child sexual abuse material (CSAM). Last year, the Take It Down Act was signed into federal law; it criminalizes knowingly distributing nonconsensual intimate images \u2014 including deepfakes \u2014 and requires platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Gov. Gavin Newsom in 2024, that crack down on sexually explicit deepfakes.<\/p>\n<p class=\"wp-block-paragraph\">Grok began fulfilling user requests on X to produce sexualized photos of women and children toward the end of the year. The trend appears to have taken off after certain adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing, which then led to other users issuing similar prompts. In a number of public cases, including well-known figures like \u201cStranger Things\u201d actress Millie Bobby Brown, Grok responded to prompts asking it to alter real photos of real women by changing clothing, body positioning, or physical features in overtly sexual ways.<\/p>\n<p class=\"wp-block-paragraph\">According to some reports, xAI has begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to certain image-generation requests, and even then the image may not be generated. April Kozen, VP of marketing at Copyleaks, told TechCrunch that Grok may fulfill a request in a more generic or toned-down way. 
Kozen added that Grok appears more permissive with adult-content creators.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">\u201cOverall, these behaviors suggest X is experimenting with multiple mechanisms to reduce or control problematic image generation, though inconsistencies remain,\u201d Kozen said.<\/p>\n<p class=\"wp-block-paragraph\">Neither xAI nor Musk has publicly addressed the problem head-on. A few days after the instances began, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. On January 3, X\u2019s safety account said the company takes \u201caction against illegal content on X, including [CSAM],\u201d without specifically addressing Grok\u2019s apparent lack of safeguards or the creation of sexualized manipulated imagery involving women.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">The positioning mirrors what Musk posted today, emphasizing illegality and user behavior.<\/p>\n<p class=\"wp-block-paragraph\">Musk wrote he was \u201cnot aware of any naked underage images generated by Grok. 
Literally zero.\u201d That statement doesn\u2019t deny the existence of bikini pics or sexualized edits more broadly.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">Michael Goodyear, an associate professor at New York Law School and former litigator, told TechCrunch that Musk likely narrowly focused on CSAM because the penalties for creating or distributing synthetic sexualized imagery of children are greater.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">\u201cFor example, in the United States, the distributor or threatened distributor of CSAM can face up to three years imprisonment under the Take It Down Act, compared to two for nonconsensual adult sexual imagery,\u201d Goodyear said.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">He added that the \u201cbigger point\u201d is Musk\u2019s attempt to draw attention to problematic user content.<\/p>\n<p class=\"wp-block-paragraph\">\u201cObviously, Grok does not spontaneously generate images. It does so only according to user request,\u201d Musk wrote in his post. \u201cWhen asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Taken together, the post characterizes these incidents as uncommon, attributes them to user requests or adversarial prompting, and presents them as technical issues that can be solved through fixes. 
It stops short of acknowledging any shortcomings in Grok\u2019s underlying safety design.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">\u201cRegulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content,\u201d Goodyear said.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">TechCrunch has reached out to xAI to ask how many times it caught instances of nonconsensual sexually manipulated images of women and children, what guardrails specifically changed, and whether the company notified regulators of the issue.\u00a0TechCrunch will update the article if the company responds. <\/p>\n<p class=\"wp-block-paragraph\">The California AG isn\u2019t the only regulator to try to hold xAI accountable for the issue. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a new investigation; and the U.K.\u2019s online safety watchdog Ofcom opened a formal investigation under the U.K.\u2019s Online Safety Act.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">xAI has come under fire for Grok\u2019s sexualized imagery before. As AG Bonta pointed out in a statement, Grok includes a \u201cspicy mode\u201d to generate explicit content. In October, an update made it even easier to jailbreak what little safety guidelines there were, resulting in many users creating hardcore pornography with Grok, as well as graphic and violent sexual images.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">Many of the more pornographic images that Grok has produced have been of AI-generated people \u2014 something that many might still find ethically dubious but perhaps less harmful to the individuals in the images and videos. 
<\/p>\n<p class=\"wp-block-paragraph\">\u201cWhen AI systems allow the manipulation of real people\u2019s images without clear consent, the impact can be immediate and deeply personal,\u201d Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. \u201cFrom Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse.\u201d<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Elon Musk said Wednesday he is \u201cnot aware of any naked underage images generated by Grok,\u201d hours before the California attorney general opened an investigation into xAI\u2019s chatbot over the \u201cproliferation of nonconsensual sexually explicit material.\u201d\u00a0 Musk\u2019s denial comes as pressure mounts from governments worldwide \u2014 from the U.K. and Europe to Malaysia and Indonesia [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":429,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_daextam_enable_autolinks":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[11],"tags":[],"class_list":["post-428","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-news"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/diyhaven858.wasmer.app\/wp-content\/uploads\/2026\/01\/grok-nonconsensual-sexua
l-images-x.jpg","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/428","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/comments?post=428"}],"version-history":[{"count":0,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/428\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/media\/429"}],"wp:attachment":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/media?parent=428"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/categories?post=428"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/tags?post=428"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}