Hate Speech, Incorporated: How Social Media Turned Bigotry Into a Business Model

By 2025, it is no longer controversial to say that social media has become a sewer. What’s controversial is admitting that it was designed this way. The polite version is to call it “polarization,” “contentious discourse,” “a marketplace of ideas with some bad actors.” The honest version is that platforms like Facebook and X have fully surrendered to overt bigotry—not just by accident, not just by negligence, but by design.

Researchers have been tracking the evidence with clinical precision. Slur-detection models confirm a marked rise in homophobic, transphobic, and racist language on X. Transphobic speech saw the biggest growth following Elon Musk’s purchase of the site, as if the platform itself were waiting for a paternal nod to say the quiet part loud. Bots and coordinated accounts still run rampant, amplifying toxic content and disinformation with machine efficiency, ensuring that if you miss one hateful post, twenty more will find you by dinner. And on Facebook, Meta’s decision to pivot away from in-house fact-checking toward a “community notes” model—an idea that works about as well as letting neighborhood watch groups handle arson—has opened the gates wide for slurs and gendered targeting to be rebranded as “just discourse.”

This isn’t a leak in the dam. It’s the dam itself being torn down for parts, and the flood labeled “user engagement.”


The surge in hate speech is not anecdotal anymore; it is empirical. Berkeley researchers confirmed what every queer person scrolling through X already knew: slurs are no longer coded, no longer whispered, no longer couched in plausible deniability. They are the feed. The data shows persistent spikes in usage across categories, but transphobia has taken the crown, overtaking racism and misogyny in the sheer velocity of growth. That should tell you something about where we are as a society: it is not enough to recycle the old hatreds; we must innovate new frontiers in cruelty.
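The measurement behind claims like this is mundane in principle: timestamped posts, a classifier or lexicon for flagged language, and a rate computed per period. A minimal sketch follows; the corpus, the `FLAGGED_TERMS` lexicon, and the keyword-matching shortcut are all hypothetical stand-ins, since real studies use trained models over millions of posts rather than word lists.

```python
from collections import Counter
from datetime import datetime

# Placeholder lexicon: real research uses trained slur-detection
# models, not a keyword set. This is illustrative only.
FLAGGED_TERMS = {"slur_a", "slur_b"}

def monthly_flag_rate(posts):
    """Fraction of posts per month containing any flagged term."""
    totals, flagged = Counter(), Counter()
    for ts, text in posts:
        month = ts.strftime("%Y-%m")
        totals[month] += 1
        # Crude match: any whole word in the flagged lexicon.
        if set(text.lower().split()) & FLAGGED_TERMS:
            flagged[month] += 1
    return {m: flagged[m] / totals[m] for m in sorted(totals)}

# Hypothetical mini-corpus showing a rise between two months.
posts = [
    (datetime(2022, 9, 1), "ordinary post"),
    (datetime(2022, 9, 2), "another ordinary post"),
    (datetime(2023, 3, 1), "post containing slur_a"),
    (datetime(2023, 3, 2), "ordinary post"),
]
print(monthly_flag_rate(posts))  # → {'2022-09': 0.0, '2023-03': 0.5}
```

The point of the exercise is not the toy numbers but the shape of the finding: the same pipeline, run at scale, is what lets researchers say "persistent spikes" with data rather than anecdote.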

The Science Media Centre study nailed the causation: reduced moderation equals emboldened hate. When Musk fired moderation teams, reinstated banned accounts, and declared himself a “free speech absolutist,” what he really did was signal to the worst actors that the playground was theirs again. It turns out “free speech absolutism” translates, in practice, to absolute indulgence for bullies. The trolls weren’t silenced; they were simply waiting for the whistle to blow.


And the whistle blew loudly. Bots and coordinated accounts—which never really disappeared—flourished under the new regime. A USC study mapped their role in amplifying slurs and toxic content, showing how automated networks inflate the visibility of hate until it becomes impossible to distinguish fringe vitriol from mainstream discourse. The irony is delicious in its cruelty: the very platforms that brag about their ability to detect patterns for targeted advertising plead helplessness in detecting coordinated hate campaigns. They can find out you’re pregnant before you tell your spouse, but they can’t tell when five thousand accounts are posting the same slur in unison.
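The helplessness is hard to take seriously because the easy case really is easy. Clustering near-identical posts by normalized text and flagging messages pushed by many distinct accounts is a first-pass coordination signal any platform could compute; the sketch below uses hypothetical account names and an arbitrary threshold, and real detection systems layer timing, network, and behavioral signals on top of this.

```python
from collections import defaultdict

def flag_coordinated(posts, min_accounts=3):
    """Group posts by normalized text and return messages posted
    by at least `min_accounts` distinct accounts — a crude signal
    of copy-paste amplification campaigns."""
    by_text = defaultdict(set)
    for account, text in posts:
        # Normalize case and whitespace so trivial variants collide.
        normalized = " ".join(text.lower().split())
        by_text[normalized].add(account)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

# Hypothetical feed: three accounts pushing the same message.
posts = [
    ("bot_01", "Some hateful message"),
    ("bot_02", "some  hateful  message"),
    ("bot_03", "SOME HATEFUL MESSAGE"),
    ("user_9", "an unrelated post"),
]
print(flag_coordinated(posts))
# flags 'some hateful message', posted by bot_01, bot_02, bot_03
```

A dozen lines of grouping logic will not catch sophisticated campaigns, but it would catch five thousand accounts posting the same slur in unison. That is the asymmetry the article points at: the detection is cheap; the will is what's missing.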


On Facebook, the rot smells different but is just as potent. Meta’s pivot to “community notes” was framed as democratic empowerment. Who needs corporate fact-checkers when you can crowdsource the truth? The flaw, obvious to anyone who has ever read a comment section, is that prejudice outnumbers integrity. Community notes work when your community is honest. They collapse when your community is invested in misinformation, bigotry, or both. And so the platform that once at least pretended to care about “safety” now plays host to open targeting of LGBTQ+ users, with posts flagged not as hate but as “opinions.”

This is not moderation. It is abdication.


The defenders will argue that hate speech is just words, that it is better out in the open, that sunlight disinfects. But sunlight doesn’t disinfect when the hosts are incentivizing mold. Words shape worlds, and what’s happening now is the normalization of bigotry as a central part of digital culture. What was once hidden in comment threads or burner accounts is now front-facing, proudly displayed, algorithmically boosted.

And because politicians signal their approval—whether through silence, dog whistles, or outright cheerleading—extremists feel emboldened to mock, harass, and target with impunity. This is not the Wild West of the internet. This is a cultivated plantation of hate, fertilized by profit margins and shielded by rhetoric about “open platforms.”


Let’s not pretend this is just a cultural issue. It is also structural economics. Hate is sticky. Hate is viral. Hate keeps people logged in. You will not scroll for hours through polite disagreement about infrastructure spending. You will scroll for hours through fights about whether your existence is valid. Engagement, the currency of platforms, is maximized when people are furious, offended, or wounded. Which means bigotry is not an unfortunate byproduct—it is a feature.

Every transphobic slur on X is not just a word. It is a data point. It is an ad impression. It is money. Every racist meme shared on Facebook is not just a joke. It is a metric of engagement. It is a reason to keep you scrolling through three more reels. The hate is not free speech. The hate is product.


The absurdity is that the platforms still pretend they are neutral. Musk postures as though he is merely liberating the discourse. Meta insists it is empowering users to decide what is true. Both claim to be hands-off, when in reality they are curating via algorithms that decide which voices are amplified and which are buried. There is nothing neutral about a feed that pushes hate to the top because it drives comments. There is nothing democratic about a system where “community notes” can be brigaded by coordinated campaigns. Neutrality is the mask. Profit is the face.


The satire of this moment is that bigotry has been rebranded as authenticity. The person calling you a slur is just “telling it like it is.” The bot campaign smearing trans people is just “raising questions.” The coordinated network spreading racial conspiracy theories is just “skeptical of mainstream narratives.” And any attempt to intervene is framed as “censorship.” The platforms have inverted reality: cruelty becomes honesty, harassment becomes debate, silence becomes tyranny.

The result is a digital commons where marginalized groups are not just excluded but actively targeted. Being queer, trans, or non-white online now comes with a baseline expectation of abuse. It is not an exception; it is the environment. And because the environment is profit-generating, no one with power has an incentive to change it.


The broader political climate only accelerates the trend. Leaders who once muttered about civility now openly use slurs onstage, retweet harassment, frame prejudice as policy. The message filters down: if it’s okay for senators, it’s okay for trolls. If it’s okay for billionaires, it’s okay for bots. The normalization is complete. The only ones told to adjust are the targets, who are advised to toughen up, log off, stop being so sensitive.


Some will argue this is simply the next stage of the internet. But it is also the next stage of governance. When hate becomes normalized online, it reshapes the terrain offline. It shifts what’s considered acceptable in legislatures, schools, workplaces. Anti-trans bills ride the wave of digital slurs. Racist conspiracy theories fuel voting restrictions. Misogynistic harassment online translates into policies designed to punish reproductive autonomy. The line between platform and polity has dissolved.


The haunting part is how ordinary it all feels now. To log on and see slurs trending is not shocking; it is expected. To watch Meta justify prejudice as “user-driven” is not surprising; it is policy. To see Musk tweet a meme echoing his own users’ hate speech is not scandal; it is branding. The platforms are not collapsing under the weight of bigotry. They are thriving because of it.

That’s the final, bitter irony. In 2025, hate speech isn’t a glitch in the system. It is the system. The researchers will keep documenting the rise. The journalists will keep writing their exposés. The survivors will keep telling their stories. And the platforms will keep counting engagement metrics, cashing ad checks, and nodding solemnly while pretending they are just the mirror, never the sculptor.

The most damning truth is this: we are no longer documenting hate. We are documenting its monetization. Social media didn’t just open the gates—it sold tickets to the show.