Across every major social platform, a quiet but profound transformation is happening. Users no longer simply communicate in natural language — they adapt, distort, and modify their words to survive within opaque algorithmic boundaries. This strange, coded lexicon is known as algospeak, a phenomenon born not from formal rules or public policies, but from user beliefs, platform ambiguity, and the growing power of algorithm-driven content moderation.
Although Big Tech companies insist that they don’t maintain “forbidden word lists,” millions of creators feel otherwise. Their daily experiences — removed posts, sudden drops in reach, demonetisation, or unclear guideline violations — foster a global perception that certain words are dangerous for visibility. The result is a new socio-digital behaviour: self-censorship shaped not by law or culture, but by the belief that algorithms are silently watching.

This article explores the deeper story behind these user perceptions, documented experiences, platform interventions, and the evolving tension between speech, safety, monetisation, and algorithmic opacity. It expands on the original reporting to offer a comprehensive, tech-industry-focused analysis.
The Myth, the Mystery, and the Machine: Why People Believe Algorithms Ban Words
On platforms like TikTok, Instagram and YouTube, everyday language is increasingly replaced with bizarre alternatives:
- “unalived” instead of “killed”
- “seggs” instead of “sex”
- “SA” for “sexual assault”
- “pew pews” for guns
Users recognise the absurdity, yet they participate because they assume using real words leads to suppression.
All major companies — TikTok, Meta, YouTube — categorically deny maintaining a “banned words” list. They emphasise context, nuance, and multifactor decision-making in their moderation systems. Yet the gap between official statements and user behaviour keeps widening.
This gap emerges from three forces:
- Historical patterns of opaque suppression
- A lack of clear, transparent explanations for moderation actions
- Human tendency to form “folk theories” in uncertain environments
The result is a digital culture driven by algorithmic imaginaries — beliefs about how algorithms operate, which may or may not reflect reality, but nevertheless shape online discourse profoundly.
When Platforms Say No List Exists — Users Say Otherwise
The conflict between creator experience and official company statements deepens every year. YouTube’s spokesperson repeats the familiar stance:
“We do not ban or restrict specific words. Context determines moderation.”
TikTok and Meta echo the same position. However, creator experiences tell a different story.
Case Study: The Comedian Who Stopped Saying ‘YouTube’ on TikTok
Content creator Alex Pearlman, with millions of followers, describes a pattern that he cannot ignore: simply mentioning the word “YouTube” — particularly in phrases like “go to my YouTube channel” — consistently kills his reach on TikTok.
TikTok denies suppressing mentions of competitors, but Pearlman and thousands of other creators feel otherwise, and their analytics reinforce the suspicion.
This belief creates a self-reinforcing loop:
- Creators avoid the word
- Less content uses the word
- Any video that does use it might naturally underperform due to randomness
- Result: perceived proof of censorship
This is how algospeak begins.
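
That last step deserves emphasis: because reach on short-video platforms is heavily skewed, a small handful of “risky” videos will usually underperform a creator’s own average purely by chance. The snippet below is a minimal, purely hypothetical simulation of this effect; the distribution and its parameters are assumptions chosen for illustration, not measurements from any platform.

```python
import random

random.seed(7)

# Toy model, not real platform data: every video's reach is drawn from the
# same heavy-tailed distribution, so the "risky" word has no effect at all
# by construction.
def creator_experience(n_videos=30, n_with_word=3):
    reaches = [random.lognormvariate(8.0, 1.5) for _ in range(n_videos)]
    with_word = reaches[:n_with_word]          # the few videos that used the word
    account_avg = sum(reaches) / len(reaches)  # pulled up by rare viral hits
    word_avg = sum(with_word) / len(with_word)
    return word_avg < account_avg              # "looks suppressed" to the creator

trials = 10_000
looks_suppressed = sum(creator_experience() for _ in range(trials))
print(f"{looks_suppressed / trials:.0%} of simulated creators would 'see' "
      "suppression even though none exists in this model")
```

In a model like this, a clear majority of creators would see the “risky” videos lag behind their account average, even though the word is treated neutrally by construction.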
When Sensitive Topics Become a Minefield: The Epstein Example
Pearlman’s experience with videos about Jeffrey Epstein intensifies his distrust. In August last year — a moment when Epstein-related content was spiking across platforms — TikTok suddenly removed multiple videos from Pearlman’s account.
Instagram and YouTube left them untouched.
TikTok did not explain which sentence or phrase violated guidelines. As many creators complain:
- Enforcement feels unpredictable
- Appeals are often denied without explanation
- Strikes directly harm income potential
Under such uncertainty, Pearlman shifted to coded references like “the Island Man.”
But this comes at a cost: clarity.
This microcosm reflects a macro problem: uncertainty breeds self-censorship.
The Historical Track Record: When Tech Platforms Quietly Shape Visibility
Although companies claim neutrality, the tech industry is filled with documented cases of opaque or biased suppression.
Examples include:
1. Facebook & Instagram limiting Palestinian content during the Gaza conflict
Investigations by the BBC and Human Rights Watch found widespread restrictions on Palestinian voices. Meta framed these as “mistakes”, but creators saw patterns too consistent to ignore.
2. TikTok’s leaked 2019 moderation guidelines
Documents revealed suppression of content from:
- disabled users
- LGBTQ+ users
- poor or “ugly” users
- politically critical livestreams
TikTok described these as outdated anti-bullying measures. Still, the damage to user trust was lasting.
3. TikTok’s secret “heating” button
The company admitted to manually boosting chosen videos, and creators like Pearlman argue that a corresponding “cooling” mechanism is therefore just as plausible.
4. YouTube’s LGBTQ+ demonetisation controversy
Creators sued in 2019, claiming videos with words like “gay” or “trans” were demonetised. The lawsuit was dismissed, but distrust persists.
In short: platform denial often conflicts with platform behaviour.
A Protest That Turned Into a Music Festival — And The Algorithms Didn’t Even Ask For It
One of the strangest cases of algospeak emerged during mass ICE-related protests in August 2025.
Creators felt platforms were suppressing protest videos. So, they invented a codeword:
“Music festival.”
Videos flooded TikTok:
“We’re at the Los Angeles music festival!”
Creators filmed huge protest crowds — but pretended they were concertgoers.
Yet, as linguist Adam Aleksic documented, there was no evidence platforms were actively suppressing protest content. Instead, a mass user belief created its own algorithmic reality:
- Using “music festival” became a cultural signal
- Engagement skyrocketed out of curiosity
- Higher engagement made these videos more visible
- This convinced everyone that suppression was real
This is a perfect illustration of the algorithmic imaginary: user beliefs about algorithms shape behaviour, which shapes algorithmic outcomes, which in turn reinforce the beliefs.
A loop built entirely from perception — not policy.
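
A minimal sketch can make the loop concrete. The toy feed below ranks two otherwise identical videos purely by observed engagement; the only difference is a small “curiosity bump” in click-through for the codeword video. All numbers and the ranking rule are assumptions for illustration, not a description of TikTok’s actual system.

```python
import random

random.seed(1)

# Minimal sketch of an engagement-ranked feed. Nothing here suppresses the
# plain video; the codeword video simply earns a small extra click-through
# from viewer curiosity.
def run_feed(rounds=50, viewers_per_round=1000,
             base_rate=0.05, curiosity_bump=0.03):
    videos = {
        "plain protest video": {"clicks": 5, "views": 100,
                                "true_rate": base_rate},
        "'music festival' video": {"clicks": 5, "views": 100,
                                   "true_rate": base_rate + curiosity_bump},
    }
    for _ in range(rounds):
        # hand out impressions in proportion to each video's observed engagement
        total = sum(v["clicks"] / v["views"] for v in videos.values())
        for v in videos.values():
            share = (v["clicks"] / v["views"]) / total
            impressions = int(viewers_per_round * share)
            v["views"] += impressions
            v["clicks"] += sum(random.random() < v["true_rate"]
                               for _ in range(impressions))
    for name, v in videos.items():
        print(f"{name:<26} total views = {v['views']:>6}")

run_feed()
```

In this sketch the codeword video reliably pulls ahead even though nothing is being suppressed, which is exactly the kind of outcome creators then read as confirmation that the plain wording was penalised.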
The Academic View: When Rules Are Opaque, Folk Theories Flourish
Sarah T. Roberts, UCLA professor and internet governance expert, explains that the opacity of platform moderation is the root of the issue.
She argues:
- Rules are vague
- Enforcement is inconsistent
- Explanations are insufficient or absent
Thus, users invent their own theories. Over time, these theories become social norms.
According to Roberts:
“All these odd behaviours only make sense inside systems that make little sense to ordinary users.”
As a result:
Not only does algospeak flourish — it becomes inevitable.
Creators Are Left Guessing — And Guessing Becomes Strategy
Ariana Jasmine Afshar, a prominent political creator, embodies this ambiguity. She frequently posts about social issues and protests:
- Sometimes her posts explode
- Sometimes they sink for no apparent reason
She uses algospeak, but admits she has no idea whether it helps.
Meta once contacted her to congratulate her on her success and offer growth strategies, further evidence that the same platform allegedly “suppressing” her content also encourages her to keep producing.
To creators, this inconsistency feels like manipulation. To platforms, it’s algorithmic complexity.
The Business Model Behind the Moderation Machine
The most important insight comes from Roberts:
Algorithmic moderation is not about politics — it’s about profit.
Social media companies earn money primarily from advertising.
Advertisers want:
- Safe environments
- Non-controversial content
- Predictable user engagement
- Minimal regulatory pressure
Therefore, platforms optimise for:
- Avoiding outrage cycles
- Avoiding political news clusters
- Minimising real-world harm
- Keeping teens “safe”
- Keeping government regulators satisfied
This leads to:
- vague community guidelines
- opaque recommendation systems
- selective suppression
- algorithmic nudges toward “brand safety”
And users respond by shaping their speech accordingly.
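
One way to picture how advertiser preferences turn into algorithmic nudges is a suitability gate that ties monetisation to the most sensitive topic a classifier detects in a video. The sketch below is purely hypothetical; the categories, scores, and threshold are invented for illustration and do not describe any platform’s real advertising policy.

```python
# Purely hypothetical "brand suitability" gate; categories, scores, and the
# threshold are invented for this illustration.
SUITABILITY_THRESHOLD = 0.7

TOPIC_SCORES = {          # scores an advertiser-facing classifier might emit
    "cooking": 0.95,
    "gaming": 0.90,
    "true crime": 0.55,
    "armed conflict": 0.30,
}

def monetisation_decision(topics):
    """Gate ads on the lowest-scoring topic detected in a video."""
    score = min(TOPIC_SCORES.get(t, 0.5) for t in topics)  # unknown topics: neutral 0.5
    if score >= SUITABILITY_THRESHOLD:
        return "full monetisation"
    if score >= 0.5:
        return "limited ads"
    return "no ads"

print(monetisation_decision(["cooking"]))               # full monetisation
print(monetisation_decision(["true crime", "gaming"]))  # limited ads
print(monetisation_decision(["armed conflict"]))        # no ads
```

Under a gate like this, a creator covering hard news loses revenue without breaking any rule, and that quiet financial pressure is precisely what pushes language toward euphemism.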
Is Algospeak the Future of Online Language?
The evolution of digital speech mirrors earlier cultural shifts — but now accelerated by algorithmic influence.
Will future generations speak in sanitized code by default?
Will online language fracture into platform-specific dialects?
Will algorithms ever achieve true contextual understanding to eliminate ambiguity?
The story of algospeak is not merely linguistic. It is a story about power, transparency, and the future of public discourse.
Platforms insist their systems are fair. Users believe otherwise.
Between the two lies a new, hybrid language created by fear, adaptation, and digital survival instincts.
In a world where online algorithms increasingly mediate knowledge, culture, and political expression, algospeak might be the first global dialect created not by people — but by machines.