Three Terrible Criticisms of Algospeak
Please, stop repeating these talking points - they're nonsense.
Social media apps - most famously TikTok, but this applies to other apps as well - make money via ads. This means that if advertisers are discouraged from marketing through their platform, they lose a lot of money. Many advertisers are discouraged when they see platforms full of sensitive material about murder, suicide, rape, and so forth. So these apps have a strong financial incentive to discourage widespread discussion of sensitive material.
Doing so is really difficult, though. Enormous numbers of videos are uploaded to TikTok every day - far too many for humans to review them all. The best the platform can do is use content-moderation algorithms to detect when content contains keywords (like “murder”) that indicate it discusses sensitive material, and penalize it. That could mean banning the creator’s account, demonetizing the video, appending scary-looking content warnings to it, or blacklisting it so it never gets recommended to anyone and languishes in obscurity. So TikTokers who want to use the word “murder” (in any context; remember that the bot punishes you just for saying “murder”, even if you’re saying “my mom will murder me for coming back late”) have to find a way around the bot. An easy solution emerges: simply use a codeword that everyone knows means murder, but isn’t on the list of words that gets you canned by the algorithm.
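To make this concrete, here’s a minimal sketch of what such keyword-based moderation could look like. The blocklist, function name, and penalty labels are all hypothetical illustrations, not TikTok’s actual system:

```python
# Hypothetical sketch of keyword-based content moderation.
# The blocklist and penalty labels are invented for illustration;
# real systems are far more sophisticated (and opaque).

BLOCKLIST = {"murder", "suicide", "rape"}

def moderate(caption: str) -> str:
    """Return "suppress" if the caption contains a flagged keyword.

    The check is purely lexical, which is exactly the blunt behavior
    described above: an innocent idiom gets the same penalty as a
    genuinely violent video.
    """
    words = {word.strip(".,!?\"'").lower() for word in caption.split()}
    return "suppress" if words & BLOCKLIST else "allow"

print(moderate("my mom will murder me for coming back late"))   # suppress
print(moderate("my mom will unalive me for coming back late"))  # allow
```

The second call shows why codewords work at all: a filter like this only matches exact strings, so any substitute that isn’t on the list sails through.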
Such codewords are usually phonetic garblings or simplistic euphemisms of the original word. “Murder” becomes “unalive”, “rape” becomes “grape”, “suicide” becomes “sewer slide”, “sex” becomes “seggs”, “Nazi” becomes “Yahtzee”, and “porn” becomes “corn”. A sentence like “my boyfriend watches Nazi porn and it’s ruining our sex” would become “my boyfriend watches Yahtzee corn and it’s ruining our seggs” on TikTok, which sounds like complete gibberish if you aren’t familiar with the codewords.
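The substitution itself is mechanically simple - a word-for-word mapping. A toy sketch using the codewords listed above (the dictionary and function names are my own invention):

```python
# Toy word-for-word algospeak encoder using the codewords from the text.
ALGOSPEAK = {
    "murder": "unalive",
    "rape": "grape",
    "suicide": "sewer slide",
    "sex": "seggs",
    "nazi": "yahtzee",
    "porn": "corn",
}

def encode(sentence: str) -> str:
    # Replace each flagged word with its codeword; leave everything else alone.
    return " ".join(ALGOSPEAK.get(word.lower(), word) for word in sentence.split())

print(encode("my boyfriend watches Nazi porn and it's ruining our sex"))
# my boyfriend watches yahtzee corn and it's ruining our seggs
```

Real users do this in their heads rather than with a script, but the principle is the same: meaning is preserved for anyone who knows the mapping, while a lexical filter sees only harmless words.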
At first, these words were purely functional, used only for dodging the automated bot. But TikTokers got so used to saying them that they made them part of their regular speech, using words like “unalive” even in contexts where they don’t have to worry about being censored. Since TikTok is most popular amongst Zoomers, these words slowly became part of Zoomer slang, colloquially called “algospeak” (because it dodges algorithms). I’m not 100% sure about this, but my sense is that algospeak tends to evolve faster than other kinds of slang because content-moderation algorithms update over time to filter out the original codewords. Thus “rape” has at times been called “grape” and “mascara”, and “pandemic” can be called “panini” or “panoramic”.
Algospeak has received heavy backlash in the internet community at large, above and beyond typical dislike for Zoomer slang. Call something “skibidi” and people roll their eyes or call you cringe, but say “unalive” and people will talk about how they’ve lost faith in humanity and that the kids aren’t alright. They seem to have a specific bone to pick with slang coined specifically to bypass algorithms.
I don’t think the most common criticisms of algospeak are very good, so I’m going to try to refute them here. My goal is pretty modest - I’m not going to try to prove that algospeak is better than other forms of slang, or even that it’s good tout court. The only thing I’m trying to show is that you shouldn’t hate algospeak more than other forms of slang on the basis of these particular talking points.
“It’s Dystopian Censorship!”
In George Orwell’s 1984, the tyrannical government forces the hapless population (on pain of imprisonment, torture, and death) to use “Newspeak”, an artificially created variant of English.
Hey, wait a minute - isn’t that kind of what’s going on with algospeak? You have a censorious government with robotic enforcers (TikTok and its algorithm), forcing people to adopt a new language (algospeak) to avoid being arrested (banned) by the government. They even use “unalive” to mean dead, just like Newspeak uses “ungood” to mean bad! If we follow this analogy, that means people who use algospeak are giving into dystopian censorship, and letting the bad guys win. That means we should oppose algospeak, right?
Well, there’s one important difference between 1984 and TikTok[1] that makes these situations disanalogous: the government in 1984 wants the population to use Newspeak. TikTok doesn’t. In fact, TikTok very much wants people not to use algospeak, so its content-moderation algorithms can properly catch anyone talking about murder or suicide. There is a sentiment that algospeak lets corporate censorship “win” by influencing the way we talk. But algospeak is not how corporate censorship wants us to talk; TikTok would strongly prefer its users stop talking about murder and suicide altogether, keeping the platform squeaky clean for advertisers.
The algorithm is the censorship. Algospeak is a sneaky way to bypass said censorship. In fact, it’s comparable to the way Chinese dissidents have bypassed political internet censorship in their country: the government doesn’t want them to say something, so they come up with an entirely new vocabulary to escape government scrutiny, and it takes on a life of its own. It may seem odd or brainrotted for Zoomers to use “unalive” even in places where they won’t be censored for saying “murder”. Yet nobody would object if I used “Winnie the Pooh” to talk about Xi Jinping in a post about Chinese politics, even though the Chinese government can’t censor me.
It would be incredibly wrongheaded to chastise Chinese bloggers for saying “river crab” or “three watches” on the basis that it would let the CCP censors win. On the contrary, Chinese censors are extremely upset about this development and have made enthusiastic attempts to shut it down, just like TikTok constantly develops stricter algorithms to try to shut down algospeak. Far from seeing algospeak as a concession to corporate censorship, we should think of it as an act of defiance that subverts censorship.
“It Makes Being Serious Harder!”
Another criticism holds that we ought to discuss sensitive subjects like rape or suicide in a serious, sober light. If we use silly-sounding euphemisms like “grape” and “sewer slide”, their inherent cartoonishness will make it hard to take the subject seriously.
This criticism doesn’t make sense. The reason algospeak was invented is to allow people to talk about such subjects in the first place. Surely having to stifle a laugh at the silliness of “grape” now and again is better than people never being able to talk about sexual assault on one of the biggest social media apps in the world. If these topics are so important to discuss, then it’s better we discuss them in a bit of a goofy way than to not discuss them at all.
But also, it’s just not true that silly slang makes it hard to talk seriously. In the mafia, “whacked” and “popped” are used as slang for murdering someone. These words are pretty silly and onomatopoeic. They sound like they belong in a Tom and Jerry cartoon or a cheesy 1980s superhero comic. Yet my impression is that mafia members don’t struggle to talk about murder seriously. Mob bosses aren’t bursting out in giggling fits during meetings about how to deal with snitches. If mafia members can say “whacked” without undermining the severity of the situation, you can do the same with “unalive”.[2]
Here are some other slang terms for death: brown bread, cark-it, eaten a twinkie, glue factory, hop the twig, kermit (referring to a suicidal death specifically), peg out, pop one’s clogs, pushing up daisies, the big Adios, toaster bath.
Is “unalive” really that much worse than all of these terms? Were 19th-century Englishmen unable to have serious conversations about death because of their silly slang? I don’t think so. I think “unalive” just feels stupider than the other slang terms because it’s relatively new, and a lot of “new slang is cringe” sentiment gets associated with it.
There’s a variation of this criticism that goes back to the 1984 analogy. The Party wants people to speak Newspeak because they want to control their thoughts, and their theory is that people think differently if they speak differently. Their aim is to make people forget ordinary English, and thus the concepts associated with ordinary English, so they can only think in the concepts the Party feeds them. This is called the Sapir-Whorf hypothesis, and the worry is that algospeak will make children and teenagers forget that the word “suicide” ever existed and only use “sewer slide” to describe the phenomenon, slowly changing their disposition towards suicide as a whole.
Now, the Sapir-Whorf hypothesis makes for a great story, but it’s not true in real life. Even if it were, what the Party does relies on complete control of communication and linguistics to force people to forget English. But TikTok doesn’t have complete control over global linguistics (and wouldn’t force people to use algospeak if it did - see the previous section). That means teenagers won’t forget that the word “suicide” exists or what it means, because even if they only use “sewer slide” on social media, they’ll still see lots of other people use “suicide” in other contexts. For instance, they’ll read news articles about the causes of suicide, and hear politicians talk about suicide rates. Furthermore, they themselves will sometimes use it, no matter how used to “sewer slide” they are. I have yet to hear of a Zoomer writing about “grape culture” in an academic essay, or giving a eulogy about the deceased’s tragic “sewer slide”.[3] Ordinary English is in no danger of becoming weakened or forgotten.
“It’s Disrespectful!”
The last criticism I often see is that it’s disrespectful to people affected by the sensitive topics in question. Sexual assault survivors, for example, might feel offended by a video that describes it as “grape”. So we shouldn’t use algospeak, because it has a unique ability to offend people.
Now, I don’t tend to think much of objections along the lines of “terminally online short-form Zoomer slang videos on social media aren’t respectful enough”. I’m not saying that toxicity isn’t a problem, and I would certainly take a dim view of a video that mocked sexual assault survivors. But I’m not convinced that using “grape” instead of “rape” is a good example of that mockery. And I’m not convinced that the people who make these videos would be any more respectful if a genie magically prevented anyone from using algospeak ever again. In general, I think the content of a video matters more than its form when deciding how problematically disrespectful it is.
Even if algospeak did have this unique power to offend, a lot of TikTokers using algospeak are opening up about their own experiences being raped or harassed or facing discrimination.[4] They’re the victims in this situation - surely their own comments, their own choice of words, can’t be offensive or disrespectful towards themselves? So even if this criticism had a leg to stand on, it wouldn’t be enough to show that algospeak as a whole is objectionable, only that it is objectionable in certain contexts.
And people who use algospeak do understand those contexts. Earlier I brought up that even the most brainrotted Zoomer is probably not going to use “sewer slide” in a eulogy. That’s because they understand the situation and know that the deceased[5] and other funeral attendees would be seriously offended by their using that word. They consciously choose to use “suicide”, a word they’re less used to, specifically in order to show respect to everyone else. That suggests that Zoomers do know when algospeak can be disrespectful and will refrain from using it in those contexts. Instead of opposing all of algospeak, like TikTok itself does, we ought to trust them to regulate their own usage of it. We can always speak up if a Zoomer really does use algospeak inappropriately.
Maybe you still think there’s something wrong with algospeak. There might be a good aesthetic case against it, given the revulsion it seems to invoke in many people. (If so, your case would probably also condemn Cockney rhyming slang, which is very sad.) Maybe you hate all slang equally, so you aren’t moved by my argument that algospeak is no worse than 19th-century working-class Englishman slang. And of course you might have other criticisms I haven’t talked about here. That’s perfectly okay. You can criticize algospeak all you want, and I’d love to hear those criticisms. Just please use better arguments than these three when doing so.
[1] Some might even argue that there’s more than one important difference between a government that tortures dissidents to death and a social media company. But I’m trying to keep the scope of this piece modest, so I won’t argue for that here.
[2] Everyone has their own impression of how serious or silly a particular slang term is, of course. But even if you take “unalive” to be a lot sillier than “whacked”, it’s absurd to draw a line in the sand and say “slang terms are only allowed to be THIS amount of silly before they get cancelled!”
[3] Do let me know in the comments if you’re a teacher or professor and have been given an essay with non-ironic use of “sewer slide” or similar. I’d be interested in what happened there.
[4] Not wanting to silence them is why we should be particularly interested in finding ways to avoid the algorithms that do so.
[5] I actually think there’s a good case for not caring about what dead people would think, because they’re dead. I’ll probably write on that in a future article, but whatever the answer is, my point in this one stands.