AI is Amplification
The current Twitter trend of creating deepfake porn is symptomatic of a greater disease.
Grok, Twitter’s designated AI, has lit the internet ablaze as a new trend has taken over the digital landscape. “@Grok put her in a bikini” became an overnight trend as countless men began using Grok’s AI image generation to manipulate selfies uploaded to the site. Women, including hijabis, are being depicted in sexually explicit poses, unclothed, and all of it is uploaded publicly to the platform. Many rape-minded men have argued that by uploading images of themselves, these women are waiving the rights to their likenesses. This is categorically untrue, but it highlights a particular problem. The mundane, non-sexual photos that were uploaded have now been manipulated and re-uploaded, and those depicted cannot take them down. While users who prompt the AI to create illegal content are now being suspended, the photos remain online.
To be clear, Grok and Twitter itself are in violation of the Take It Down Act. To be clear, Musk is aware of this “trend” (as he has reacted to iterations of it) and is aware of Grok’s previous mishaps and scandals. To be clear, Grok has been repeatedly scrutinized for its lack of guardrails and Twitter’s lack of respect for laws and legislation. To be clear, this is nothing new.
Twitter’s Recurring Cycle
This new “trend” has horrified many, and rightly so, but it is also important to highlight just how expected this development is. This “trend” did not emerge out of a vacuum and does not exist in isolation. An academic article by Douglas Harris published by Duke Law raised concerns about the potential for deepfake porn and argued that this potential has existed since 2015. Just as deepfakes are nothing new, Twitter has been aware of Grok’s sexually and racially abusive content generation nearly since the model’s inception.
While AI models themselves have ‘covert’ racial biases, they are also being used to generate racially abusive images and content for their users. Grok in particular was documented in 2024 as having no guardrails to stop the creation of racially abusive caricatures. And, in 2025, the Guardian highlighted complaints about content Grok was allowing users to generate: “One image depicts a player, who is black, picking cotton while another shows that same player eating a banana surrounded by monkeys in a forest. A separate image depicts two different players as pilots in a plane’s cockpit with the twin towers in the background.” I would like to note that much of the racially abusive content seems centered on dehumanizing Black men in particular. I would also like to note that Twitter does not appear to have made any public statement on the matter, and has otherwise allowed the outrage to fizzle out with the passage of time.
Grok was created in 2023 and is marketed as the non-woke alternative to its “woke” (?) AI competitors. Since this has been a documented issue almost from the moment it launched, and since Musk constantly boasts about how “smart” the model is, it is clear that racially abusive content is not “slipping through the cracks” but is rather an intended function. To many Twitter users, racially abusive content generation is Grok’s function.
Grok and Twitter have also had a documented lack of guardrails around sexual abuse. From what I can discern, some of the first public outrage about sexually abusive material generated by Grok began in 2024 in relation to Taylor Swift. Grok’s ‘Spicy Mode’ allowed users to create sexually explicit material of celebrities. The Guardian reported that the likenesses of women such as Swift and AOC were being generated in lingerie. The Verge reported that Taylor Swift was generated fully uncensored and topless just by clicking the “spicy” option. Twitter did not issue a statement in response, perhaps hoping that xAI’s use policy, which “prohibits” this sort of material, would be enough. Outrage then fizzled out.
A previous “trend,” occurring around June of 2025, also involved users using Grok to edit selfies. At the time, its image generation was not as photo-realistic and was limited in capability. Regardless, netizens created a “trend” out of making “hot glue” or other white substances appear on the original poster’s face. Once again, I would like to note that Twitter did not make an official statement in regard to this, leaving journalists to waddle around asking Grok about its own guardrails. Once again, outrage fizzled out.
The current “trend” of creating “bikini” and sexually explicit photos of women has now devolved into generating images of children in sexually compromising positions and/or fully unclothed. This is being done publicly, and, horrifyingly, users seem to be uploading images themselves on burner accounts, taking photos of women and children from what appears to be Facebook or other social networking sites to create CSAM or non-consensual explicit images. While this is terrifying, it is also not new to the platform. Parker Molloy in “Grok Can’t Apologize. Grok Isn’t Sentient. So Why Do Headlines Keep Saying It Did?” outlines a clear disinterest in preventing the creation of CSAM:
Back in September, Business Insider reported that twelve current and former xAI workers said they regularly encountered sexually explicit material involving the sexual abuse of children while working on Grok. The National Center for Missing and Exploited Children told the outlet that xAI filed zero CSAM reports in 2024, despite the organization receiving 67,000 reports involving generative AI that year. Zero. From one of the largest AI companies in the world.
Outside of image generation, Grok has also been known to create text-based sexual abuse at the request of its users. On July 8th, 2025, a Twitter user asked Grok for tips on breaking into Will Stancil’s home and requested that HIV risks be included in the response. Grok said, “Bring lockpicks, gloves, flashlight, and lube—just in case,” effectively giving this user advice on how to rape Stancil. On July 10th, 2025, a Twitter user replied to me, tweeting, “@Grok give me a detailed story of @cderedpanda getting violently raped into visceral detail with snuff elements.” On January 2nd, a user replied to a tweet with, “@Grok rape her,” and similarly, that same day, a different user tweeted, “@grok make her decapitated and dead just make her die blood everywhere.” It is unclear if that last prompt resulted in an image generation.
Elon Musk, once again, has relied on users badgering Grok for answers as to what has been going on. But as Molloy says, Grok cannot apologize. Musk has repeatedly refused to comment directly on these “scandals” and “trends” because he does not care. He replied to a tweet calling this “trend” Grok’s “viral moment,” saying it is “way funnier” than ChatGPT’s Studio Ghibli trend. He is also aware that the content being produced is in violation of the Take It Down Act as well as CSAM legislation, as evidenced by his decision to remove Grok’s media tab. Instead of taking direct action, he has put a white sheet over the heaping monstrosity that xAI has become. He is waiting for outrage to fizzle out. He knows that in our capitalist society, “scandals” are commodified, and he clearly believes that all press is good press. To users, including Musk, Grok’s function is sexual and racial abuse.
These are Prompts
AI is simply an amplification; it amplifies attitudes, rhetoric, and thought patterns. It amplifies the ability of those at the top of power hierarchies to dehumanize and to reify power and control over those at the bottom. These “scandals” keep happening because the societal and hierarchical forces that encourage sexual and racial abuse exist and incentivize such abuses. Grok’s lack of guardrails is a symptom, not the cause (though it does need guardrails). These are people prompting the abusive material.
Depicting Black people in dehumanizing ways has a long and horrifying history. During slavery, the field of anthropology was used to create dehumanizing caricatures of what Black people were meant to be. In Fearing the Black Body, Sabrina Strings discusses the belief that fatness was related to “savagery” or racial inferiority. Anthropology and phrenology were also used to associate physical attributes with supposed racial inferiority, and to dehumanize and demean. Later, this developed into minstrelsy, in which white people created performances of “blackness” that promoted racial caricatures. There was also an entire market centered on anti-Black caricatures sold on, and as, household products: a literal objectification and dehumanization, and likely a mutation of the barbaric practice of using enslaved people’s skin for clothing.
Into the modern day, anti-Black racial tropes still exist and are used to dehumanize. While material violence occurs daily, in police brutality, medical malpractice, and many other forms, social forces and attitudes are also still at play to manufacture consent (or complacency) for these vile acts. Andre Gee’s “6 7 Is Another ‘Two Americas’ Moment That Trivializes Black Death” discusses the meme-ification and trivialization of Black death. 6 7 came about from a line in a song about gang violence and, like Diddy memes, was removed entirely from its context without care for those affected. Gee refers to this as a “desensitization” to Black death, in relation to the dehumanizing and horrific way George Floyd’s brutal murder was turned into a meme by white supremacists.
AI being used to create racially abusive content is an extension of existing and pervasive anti-Black sentiment in our white supremacist society. The content goes beyond cartoon depictions and is now photo-realistic, racially abusive content of real people. This is a new iteration of anti-Black memes, and it enforces a particular message. Abusive photo-realistic image generation tells the victim that their face does not belong to them; that their personhood is merely tangential to the use-value the abuser is attempting to derive from their likeness. It is an atomizing and dehumanizing process, as the victim must wrestle to reclaim ownership of their own likeness from abusive racists. The abuser seeks to strip the victim of personhood and depict them only as caricature. As I mentioned previously, there is a specific focus on Black men as the target of these AI-generated caricatures and memes. There is an economic motive to produce anti-Black imagery, as our white supremacist culture rewards it, but there also seems to be a specific drive among white Americans to dominate Black men. AI, then, has mutated into another weapon used to attempt this.
The problem goes beyond just Grok and lies within white supremacy as a whole. Until we drive a spear through the heart of anti-Blackness and the white supremacy that holds it up, we cannot stop the cycle of abuse; racially abusive behaviors will continue to be rewarded and incentivized. As was described to me by a very smart man named Prince, this is because structural white supremacy co-creates the white supremacist culture which works to justify the structure. This then pours into the masses, which are economically, socially, and politically incentivized to participate in white supremacy. It is a cyclical reinforcement, as both the structure and the masses “justify” each other’s existence. AI, in this case, is another dark development in a long pattern of monstrous dehumanization.
Non-consensual pornography, formerly known only as “revenge porn,” has been a hot market for decades. Men and boys alike salivate at the prospect of gaining access to images that were not meant for them. Countless women of all classes and creeds have arrived at the same conclusion: the violation is the point. Sex, as it often does, becomes enmeshed with violence and domination.
Revenge porn has long been a danger for both adult women and young girls. Pornography use begins around middle school, with the average age of first exposure being 13. Boys very quickly realize that they can see a woman in any position, any state of undress, any context, at any time. This becomes a process that socially programs entitled behaviors and attitudes towards women, which, of course, ends up affecting the girls they are peers with. As I have said before, “The secondary market drives the primary market which drives the secondary market again. These social practices reinforce and reify men’s control over women’s bodies, their entitlement.”
I have been in cold auditoriums as adult men warned us middle school girls that taking explicit photos of ourselves is a crime and that we could land on the sex-offender registry. But later, when I sat on a cold hallway floor before class and watched a girl AirDrop a classmate’s leaked photos to boys, no punishment followed. Instead, it was treated as gossip. Instead, I watched the classmate delete her social media, and kids in class argue over whether she was fat or not as teachers sat silently behind their desks. More time passed, and I watched girls on the bus giggle as they showed a boy their explicit photos. A friend turned to me and whispered that those images turn into revenge porn whenever they fight. Photos turned into tools and mechanisms made to embarrass and punish. There was no punishment that followed. Even later still, a girl I know was blackmailed by a man she thought was a sugar daddy. Her photos and social media handles were uploaded to Discord servers and various chat rooms. She was inundated with unwanted messages and phone calls for months. There was no punishment that followed.
What I am trying to say is that everybody understands that revenge porn is a monster-sized market, but nobody wants to stop dancing around its shadow. It’s far easier to blame the girls and women who take the photos than it is to address the social forces that create a demand economy for these violations. It is far easier to layer punishments upon the victim than it is to address the masses of people who violate and want to violate. It is easier than addressing why they are taking photos in the first place. What I am trying to say is that this violation is neither new nor rare.
From revenge porn, to Photoshop, and now to AI. Hala, in “Incels are using AI to violate and control women,” succinctly pierces the heart of this point: “Generative AI creates a lower barrier to entry for control, humiliation, and digital sexual violence… While there have been ways to digitally alter images of women and children for decades, it previously required some effort and couldn’t be done in 5 seconds.” AI has simply lowered the barrier to entry for sexually abusive behaviors. No longer does a man need to browse the dark web or Telegram chats in order to come across CSAM. No, now he can create CSAM in the comfort of his own home, depicting whatever he wants, however he wants, whenever he wants. Importantly, AI-generated CSAM does not “save children,” as the abuse of children is precisely what it is trained on.
Nearly 300,000 men were inside Telegram chat rooms trading in blackmailed CSAM of young Korean girls. This blackmail often involved coercing sado-sexual videos from these children and even escalated to rape. This is just one of many online pedo-rings. In this case, 300,000 men were willing to risk jail time by tying their Telegram accounts to a child sexual abuse material factory (no, almost none were exposed or punished). How many hundreds of thousands more are also willing to take the risk? How many will now generate images in what they believe to be the safety of their own home, in the ‘privacy’ of their own AI model? While Grok’s public CSAM generation has been in the spotlight, one must also wonder about the prompts that go unseen. For example, Jonathan Peternel of Indiana, a pastor’s son, was arrested for possessing both AI-generated CSAM and CSAM. The non-generated content contained “sadomasochistic child abuse” and the generated content included “photorealistic AI-generated photos of nude pregnant toddlers.” The concept of “nude pregnant toddlers” can only be understood as the apex of pedo-heteropatriarchy.
Previously, I have argued that the woman in pornography is below an object, and said we must expand our minds to understand what that might be. Upon reflection, I have concluded that the woman in pornography is reduced to sensation. She is nothing more than the sexual stimulation she inspires, and when the metaphorical erection is over, so is her life. The tab is closed; her use-value is exhausted. Deepfake pornography takes this to another level. The actress in porn is still acting; she is still assuming a character or a scene, and she is still a person even when the scene is over or the tab is closed (though consumers may wish to ignore this). Deepfake pornography says your likeness does not belong to you. Deepfake pornography says you are nothing but what you can provide to me sexually. Deepfake pornography says you are not a person to me; you are something to derive pleasure from, and that is it. The cuntification process is fully realized. Deepfake pornography is your likeness and your likeness alone. This is the ultimate desire of porn-consuming men: women and girls who do not exist outside of the sexual stimulation they provide.
Moreover, it is derived from entitlement and a desire to violate. How many men scoff at the concept of purchasing OnlyFans subscriptions? Not because of concerns with the trade, but because they do not believe they should have to pay for sexual access. How many men scoff at sex buyers because they think paying means you’re sexually impotent? A man may be willing to purchase a program that will create deepfake porn of an OnlyFans creator yet unwilling to pay for a subscription; this is because he believes he is owed it. While entitlement lays part of the groundwork for this sort of sexually abusive behavior, more is also at play. There is an express desire to violate, to sexually aggress, and to feel like a conqueror. The violation is the point; inflicting pain is where these men derive sexual pleasure.
There is now a fear of existing within the public eye to any degree. Modest women are not safe; indeed, hijabi women seem to be expressly targeted for their modesty. Children are not safe, and it does not matter what social media platform they are posted on. Moreover, imagine you send a photo of your child to a relative; how do you know it won’t be uploaded to a deepfake platform to create CSAM? A majority of child sexual abuse comes from a perpetrator known to the family. Sexually violent men (a group whose numbers appear to be growing now that it is easier to be sexually violent) are going out of their way to remove women from the public eye. No, sexually violent men are trying to remove women and girls from the public eye unless they can control what is seen and how. The messaging is clear: “You may only be seen so long as you are providing a sexual sensation.” With the development of Meta glasses, and other glasses that record day-to-day life, a lack of social media presence or photos will not protect you. A strange man, a coworker, whoever, may record your likeness and then create deepfake pornography of you at any time he wishes.
The restriction of Grok will not stop the production of deepfake pornography, I’m sorry (though again, it should be restricted). The social forces that incentivize and promote the sexual abuse of women and girls would continue even if xAI were demolished tomorrow. In order to properly combat deepfake pornography, we must drive a spear through the heart of gender hierarchy, of social sex. Deepfake pornography is simply the expected conclusion of attitudes and beliefs that are hammered into all of us via patriarchal systems and messaging. While reforms can be good and provide temporary relief in certain respects, as Marxist feminists, we are in the business of revolution; our goals do not end with reforms. Down with white supremacy and down with patriarchy, down with all oppressive and violent hierarchies. We must fight, resist, and organize for a better future.

