Defame and Blame

The Internet makes it cheap to damage someone else's reputation without risking your own. The asymmetry between the low cost of spreading false information and the high cost to victims of such attacks is an economic and architectural failure, an unintended consequence of a communications infrastructure that's nominally decentralized while actually largely centralized under the control of a few advertising-based companies.
We rarely hear harassment and defamation discussed as either an economic failure or an engineering failure. Instead, we hear that online harassment is sad but inevitable, or that it happens "because people suck." As Anil Dash wrote, "don't read the comments" normalizes the expectation that behavior online will sink to the lowest common denominator and stay there. People seem to take the same approach to outright harassment that they take to comments expressing bad opinions.
The cases I'm talking about, like the defamation of Kathy Sierra or the Gamergate coordinated harassment campaign, are effective because of their use of proxy recruitment. Effective propagandists who have social capital have learned how to recruit participants for their harassment campaigns: by coming up with a good lie and relying on network effects to do the rest of the work. Spreading false information about a woman -- particularly a woman who is especially vulnerable because of intersecting marginalized identities -- is easy because it confirms sexist biases (conscious or not-so-conscious) that we all have. Since most of us have internalized the belief that women are less competent, convincing people that a woman slept her way to the top doesn't take much effort.
"Don't read the comments" isn't good advice for people who are merely being pestered. (And anyway, we might question the use of the word "merely", since having to manage a flood of unwanted comments in order to receive desired communication tends to have a certain isolating effect on a person.) But it's especially bad advice for people who are being defamed. What good does it do to ignore the people spreading lies about you when ignoring them won't change what set of web pages a search engine returns as the top ten hits for your name? When you advise targets of harassment to "ignore it" or to "not feed the trolls", you shift responsibility onto victims and away from the people who benefit from the spread of false information (and I don't just mean the people who initiate harassment campaigns). In short, you blame victims.
Algorithms, Advertising, and Accountability

We neither talk much about the democratization of defamation nor know how to mitigate it. But it happens for a reason. Online harassment and defamation campaigns are a predictable consequence of a telecommunications infrastructure dominated by for-profit, advertising-supported businesses governed by algorithms that function autonomously. Yet neither the autonomy of algorithms nor the ad-supported business model that most social media and search engine companies share is inevitable. Both are the result of decisions made by people, and both can be changed if people have the will to do so. The combination of ads and unsupervised algorithms currently defines the political economy of telecommunications, but it is by no means inevitable, natural, or necessary.
Broadcast television is, or was, advertising-supported, but it didn't lend itself to harassment and defamation nearly as easily, since a relatively small group of people had access to the megaphone. Likewise, online services don't have to encourage bad-faith speech, and discouraging it doesn't necessarily require a huge amount of labor: for example, eBay functions with minimal human oversight by limiting its feedback function to comments attached to an actual financial transaction. However, online search engines and recommendation systems typically use an advertising-based business model in which customers pay for services with their attention rather than with money, and typically operate with neither human supervision nor any design effort devoted to discouraging defamation. Because of these two properties, it's relatively easy for anyone sufficiently determined to take control of what shows up when somebody looks up your name in the global distributed directory known as your favorite popular search engine -- that is, unless you can afford the public relations apparatus it takes to guard against such attacks. Harassment campaigns succeed to the extent that they exploit the ad-based business model and the absence of editorial oversight that characterize new media.
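The eBay-style constraint -- feedback only exists where a real transaction exists -- can be sketched in a few lines. This is a toy model for illustration; the class and method names are invented here and don't correspond to any real eBay API:

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    buyer: str
    seller: str
    tx_id: str

@dataclass
class FeedbackSystem:
    """Toy model of transaction-gated feedback: a comment is accepted
    only if it references a completed transaction between the author
    and the target. No moderator ever has to read the comment."""
    transactions: dict = field(default_factory=dict)  # tx_id -> Transaction
    feedback: list = field(default_factory=list)

    def record_transaction(self, tx: Transaction) -> None:
        self.transactions[tx.tx_id] = tx

    def leave_feedback(self, author: str, target: str,
                       tx_id: str, comment: str) -> bool:
        tx = self.transactions.get(tx_id)
        # Reject comments with no underlying transaction, or from/about
        # parties who weren't the buyer and seller in that transaction.
        if tx is None or {author, target} != {tx.buyer, tx.seller}:
            return False
        self.feedback.append((author, target, comment))
        return True
```

The point of the sketch is that the gate is structural: a drive-by defamer who never transacted with the target simply has no slot to write into, so the cost of attacking a reputation rises without any human supervision.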
What This Article is Not About

Three topics I'm not addressing in this essay are:
- Holding public figures accountable. When people talk about wanting to limit access to the megaphone that search engines make freely available to sufficiently persistent individuals, a common response is, "Are you saying you want to limit people's ability to hold powerful people accountable?" I think it's important for private citizens to be able to use the Internet to expose wrongdoing by powerful people, such as elected officials. I don't agree with the assumption behind this question: that private citizens ought to be exposed to the same level of public scrutiny as public figures are.
- "Public shaming." What some people call "public shaming" refers to criticism of a person for a thing that person actually said. When Jon Ronson wrote about Justine Sacco getting "publicly shamed", he didn't mean that people falsely accused her of using her public platform to make a joke at the expense of people with AIDS. He and Sacco's critics agree that she did freely choose to make that joke. I'm talking about something different: when people use technology to construct a false narrative that portrays their adversary as having said something the adversary didn't say. This is not an article about "public shaming".
The difference between defamation and shaming is that defamation is defined by the behavior of the subject rather than the emotional reaction of the object; the concept of shaming rests on the idea that it's wrong to make certain people feel certain ways, and I don't agree with that idea.
- Censorship. I'm not advocating censorship when I ask how we entered a technological regime in which quality control for information retrieval algorithms is difficult or impossible without suppressing legitimate speech. I'm pointing out that we've designed ourselves into a system where no fine distinctions are possible, and the rapid dissemination of lies can't be curtailed without suppressing truth. As Sarah Jeong points out in her book The Internet of Garbage, the belief that discouraging harassment means encouraging censorship is founded on the false assumption that addressing harassment online means suppressing or deleting content. In fact, search engines already filter, prioritize, and otherwise implement heuristics about information quality. Some of the same technologies could be used to -- in Jeong's words -- dampen harassment and protect the targets of harassment. If you object to that, then surely you also object to the decisions encoded in information retrieval algorithms about what documents are most relevant to a query.
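Jeong's point that dampening isn't deletion can be illustrated with a toy re-ranking sketch. Everything here -- the function names, the abuse signal, the weight -- is invented for illustration and isn't how any real search engine works; the only claim is structural: a quality signal can reorder results without removing anything:

```python
def rank(results, abuse_score, quality_weight=0.5):
    """Re-rank search results by relevance discounted by an abuse signal.

    results: list of (doc_id, relevance) pairs, relevance in [0, 1]
    abuse_score: function mapping doc_id -> estimated probability in
        [0, 1] that the document belongs to a coordinated harassment
        campaign (hypothetical signal, however it might be computed)
    Nothing is suppressed or deleted: dampening only reorders.
    """
    def score(item):
        doc_id, relevance = item
        return relevance * (1 - quality_weight * abuse_score(doc_id))
    return sorted(results, key=score, reverse=True)
```

For example, a highly "relevant" smear page can fall below the target's own homepage once the abuse signal discounts it, while still remaining in the result list -- which is exactly the distinction between dampening and censorship.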
What's Next

So far, I've argued that social network infrastructure has two design flaws that amplify rather than dampen harassment:
- Lack of editorial oversight means that the barrier to entry for publishing has dropped: you no longer need to be a journalist (journalists have never been perfect, but at least they belong to a profession with standards and ethics), just someone with a little charisma and a lot of free time.
- An advertising-supported business model means that a mildly charismatic, very bored antihero can find many bright people eager to help disseminate their lies, because lies are provocative and provocative stories get clicks.
In the next three installments, I'll elaborate on how we got into this situation and what we could do to change it.
Do you like this post? Support me on Patreon and help me write more like it.