[CW: suicide]

Elizabeth Waite was a trans woman who committed suicide last week. I did not know Elizabeth, but several of my friends did. In an article for the Daily Beast, Ben Collins described what happened after she died (CW if you follow the link to the article: it quotes extremely transmisogynistic and violent comments and images, including some that incite suicide.)


The night the article describes, I sat in my office after work with Elizabeth's profile open in a tab, watching the stream of hateful comments pour in almost faster than I could report them to Facebook. My friends had mentioned that members of an online forum known for terrorizing autistic trans women were flooding her profile (particularly her last post, in which she stated her intention to commit suicide) with hateful comments. Since I didn't know Elizabeth and wasn't emotionally affected by reading these comments in the same way that I would have been if I had known her, I felt that bearing witness and reporting the comments as abuse was work that I could usefully do. Since many of the comments were obviously from fake accounts, and Facebook is well-known for its desire for good data (read: monetizable data), specifically accounts attached to the names people use in everyday life, I reported those accounts as fake as well.

And later that night, I watched dozens and dozens of automated responses from Facebook's abuse reporting system fill my inbox. Most of them said this:


Thank you for taking the time to report something that you feel may violate our Community Standards. Reports like yours are an important part of making Facebook a safe and welcoming environment. We reviewed the comment you reported for displaying hate speech and found it doesn't violate our Community Standards.
Please let us know if you see anything else that concerns you. We want to keep Facebook safe and welcoming for everyone.


screenshot of the quoted text

Because the posts in question were eventually made private, I can't quote the comments about which a Facebook content reviewer said "it doesn't violate our Community Standards", and in fairness to the person or people reviewing the comments, some of the comments weren't obviously hate speech without the context that they were in a thread of people piling on a dead trans woman. Facebook lacks a way to report abuse that goes beyond "the text of this individual comment, in the absence of context, violates Facebook's Community Standards." That's part of the problem. If trans people were in positions of power at Facebook, you can bet that there would be a "report transmisogynist hate mob" button that would call attention to an entire thread in which an individual was being targeted by a coordinated harassment campaign.

Likewise, even though Facebook is notorious for harassing trans people for using the names we use in everyday life as our account names, when I reported an account with the name "Donny J. Trump" for impersonation, I got an email back saying that the account would not be suspended because it wasn't impersonating anybody:

screenshot of the aforementioned text

Facebook's tools don't address this problem. Imagine you're the family member of a trans woman who just died and whose profile is receiving a flood of hateful comments. Dozens of users are posting these comments -- too many to block, and anyway, what good would blocking do if you don't have access to the deceased person's account password? The comments would still be there, defacing that person's memory. Reporting individual comments has no effect if the harassment is conducted by posting a series of memes that aren't necessarily offensive on their own, but have the effect of demeaning and belittling a person's death when posted as comments in response to a suicide note. And getting an account converted to a "memorial account" -- which allows someone else to administer it -- can take days, which doesn't help when the harassment is happening right now. Again: you can look at Facebook and know that it's a company in which the voices of people who worry about questions like, "when I die, will people on an Internet forum organize a hate mob to post harmful comments all over my public posts?" are not represented.

But Facebook doesn't even do what they promise to do: delete individual comments that clearly violate their community standards:

Facebook removes hate speech, which includes content that directly attacks people based on their:

Race,
Ethnicity,
National origin,
Religious affiliation,
Sexual orientation,
Sex, gender, or gender identity, or
Serious disabilities or diseases.


Of the many comments in the threads on Elizabeth Waite's profile that clearly attacked people based on their gender identity or disability, most came back from Facebook marked "doesn't violate our Community Standards."

At this point, Facebook ought to just stop pretending to have an abuse reporting system, because what they promise to do has nothing to do with what they will actually do. Facebook's customers are advertisers -- people like you and me who produce content that helps Facebook deliver an audience for advertisers (you might think of us as "users") are the raw material, not the customers. Even so, it's strange that companies that pay for advertising on Facebook don't care that Facebook actively enables this kind of harassment.

If you read the Daily Beast article, you'll also notice that Facebook was completely unhelpful and unwilling to stop the abuse other than in a comment-by-comment way until one of the family members found a laptop that still had a login cookie for Elizabeth's account -- they wouldn't memorialize it or do anything else to stop the abuse wholesale in a timely fashion. What would have happened if the cookie had already expired?

Like anybody else, trans people die for all kinds of reasons. In an environment where hate speech is being encouraged from the highest levels of power, this is just going to keep happening more and more. Facebook will continue to refuse to do anything to stop it, because hate speech doesn't curtail their advertising revenue. In fact, as I wrote about in "The Democratization of Defamation", the economic incentives that exist encourage companies like Facebook to potentiate harassment, because more harassment means more impressions.

Although it's clearly crude economics that make Facebook unwilling to invest resources in abuse prevention, a public relations person at Facebook would probably tell you that they are reluctant to remove hate speech because of concern for free speech. Facebook is not a common carrier and has no legal (or moral) obligation to spend money to disseminate content that isn't consistent with its values as a business. Nevertheless, think about this for a moment: in your lifetime, you will probably have to see a loved one's profile get defaced like this and know that Facebook will do nothing about it. Imagine a graveyard that let people spray paint on tombstones and then stopped you from washing the paint off because of free speech.

What responsibilities do social media companies -- large ones like Facebook that operate as completely unregulated public utilities -- have to their users? If you'd like, you can call Facebook's billions of account holders "content creators"; what responsibilities do they have to those of us who create the content that Facebook uses for delivering an audience to advertisers?

Facebook would like you to think that they give us access to their site for free because they're nice people and like us, but corporations aren't nice people and don't like you. The other viewpoint you may have heard is: "If you're not paying for the product, then you are the product." Both of these stories are too simplistic. If you use Facebook, you do pay for it: with the labor you put into writing status updates and comments (without your labor, Facebook would have nothing to sell to advertisers) and with the attention you give to ads (even if you never click on an ad).

If you're using something that's being given away for free, then the person giving it away has no contractual obligations to you. Likewise, if you are raw material, then the people turning you into gold have no contractual obligations to you. But if you're paying to use Facebook -- and you are, with your attention -- that creates a buyer/seller relationship. Because this relationship is not formalized, you as the buyer assume all the risks in the transaction while the seller reaps all of the economic benefit.


Do you like this post? Support me on Patreon and help me write more like it. In December 2016, I'll be donating all of my Patreon earnings to the National Network of Abortion Funds, so if you'd like to show your support, you can also make a one-time or recurring donation to them directly.

This post is the second in a 4-part series. The first part was "Defame and Blame". The next part is "Server-Side Economics."

Phone Books and Megaphones

Think back to 1986. Imagine if somebody told you: "In 30 years, a public directory that's more accessible and ubiquitous than the phone book is now will be available to almost everybody at all times. This directory won't just contain your contact information, but also a page anyone can write on, like a middle-school slam book but meaner. Whenever anybody writes on it, everybody else will be able to see what they wrote." I don't think you would have believed it, or if you found it plausible, you probably wouldn't have found this state of affairs acceptable. Yet in 2016, that's how things are. Search engine results have an enormous effect on what people believe to be true, and anybody with enough time on their hands can manipulate search results.

Antisocial Network Effects

When you search for my name on your favorite search engine, you'll find some results that I wish weren't closely linked to my name. People who I'd prefer not to think about have written blog posts mentioning my name, and those articles are among the results that most search engines will retrieve if you're looking for texts that mention me. But that pales in comparison with the experiences of many women. A few years ago, Skud wrote:

"Have you ever had to show your male colleagues a webpage that calls you a fat dyke slut? I don’t recommend it."

Imagine going a step further: have you ever had to apply for jobs knowing that if your potential manager searches for your name online, one of the first hits will be a page calling you a fat dyke slut? In 2016, anybody who wants to can make that happen to somebody else, as long as the target isn't unusually wealthy or connected. Not every potential manager is going to judge someone negatively just because someone called that person a fat dyke slut on the Internet, and in fact, some might judge them positively. But that's not the point -- the point is that if you end up in the sights of a distributed harassment campaign, then one of the first things your potential employers will know about you, possibly for the rest of your life, might be that somebody called you a fat dyke slut. I think most of us, if we had the choice, wouldn't choose that outcome.

Suppose the accusation isn't merely a string of generic insults, but something more tangible: suppose someone decides to accuse you of having achieved your professional position through "sleeping your way to the top," rather than merit. This is a very effective attack on a woman's credibility and competence, because patriarchy primes us to be suspicious of women's achievements anyway. It doesn't take much to tip people, even those who don't consciously hold biases against women, into believing these attacks, because we hold unconscious biases against women that are much stronger than anyone's conscious bias. It doesn't matter if the accusation is demonstrably false -- so long as somebody is able to say it enough times, the combination of network effects and unconscious bias will do the rest of the work and will give the rumor a life of its own.

Not every reputation system has to work the way that search engines do. On eBay, you can only leave feedback for somebody else if you've sold them something or bought something from them. In the 17 years since I started using eBay, that system has been very effective. Once somebody accumulates social capital in the form of positive feedback, they generally don't squander that capital. The system works because having a good reputation on eBay has value, in the financial sense. If you lose your reputation (by ripping somebody off), it takes time to regain it.
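As a concrete illustration of that design, here is a minimal sketch of my own (not eBay's actual code): a rating is accepted only if it is attached to a real trade between the two parties, so reputation costs something to manufacture and something to throw away.

```python
# A minimal sketch of a transaction-gated reputation system -- my own
# illustration, not eBay's actual implementation. The design point: a
# rating only exists if it's attached to a completed trade, so reputation
# is costly to manufacture and costly to throw away.

from dataclasses import dataclass, field


@dataclass
class ReputationSystem:
    # completed trades, recorded as (buyer, seller, transaction_id) tuples
    transactions: set = field(default_factory=set)
    feedback: list = field(default_factory=list)

    def record_sale(self, buyer: str, seller: str, transaction_id: str) -> None:
        self.transactions.add((buyer, seller, transaction_id))

    def leave_feedback(self, rater: str, ratee: str, transaction_id: str,
                       score: int, comment: str) -> bool:
        # Reject the rating unless the rater actually bought from or sold to
        # the ratee in this specific transaction.
        traded = ((rater, ratee, transaction_id) in self.transactions
                  or (ratee, rater, transaction_id) in self.transactions)
        if not traded:
            return False
        self.feedback.append((rater, ratee, transaction_id, score, comment))
        return True

    def reputation(self, user: str) -> int:
        return sum(score for _, ratee, _, score, _ in self.feedback
                   if ratee == user)


rep = ReputationSystem()
rep.record_sale("alice", "bob", "t1")
rep.leave_feedback("alice", "bob", "t1", +1, "fast shipping")   # accepted
rep.leave_feedback("mallory", "bob", "t9", -1, "sockpuppet")    # rejected: no such trade
```

Compare that with the open web's de facto "reputation system," where anybody can add entries about anybody, for free, under any name.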

On the broader Internet, you can use a disposable identity to generate content. Unlike on eBay, there is no particular reason to use a consistent identity, since there's no track record as a seller to build up. If your goal is to build a #personal #brand, then you certainly have a reason to use the same name everywhere, but if your goal is to destroy someone else's, you don't need to do that. The ready availability of disposable identities ("sockpuppets") means that defaming somebody is a low-risk activity even if your accusations can be demonstrated to be false, because by the time somebody figures out you made your shit up, you've moved on to a new name that isn't sullied by a track record of dishonesty. So there's an asymmetry here: you can create as many identities as you want, at no cost, to destroy someone else's good name, while having a job and functioning in the world makes it difficult for your target to change identities constantly.

The Megaphone

For most of the 20th century, mass media consisted of newspapers, then radio and then TV. Anybody could start a newspaper, but radio and TV used the broadcast spectrum, which is a public and scarce resource and thus is regulated by governmental agencies. Because the number of radio and TV channels was limited, telecommunications policy was founded on the assumption that some amount of regulation of these channels' use was necessary and did not pose an intrinsic threat to free speech. The right to use various parts of the broadcast spectrum was auctioned off to various private companies, but this was a limited-scope right that could be revoked if those companies acted in a way that blatantly contravened the public interest. A consistent pattern of deception would have been one thing that went against the public interest. As far as I know, no radio or TV broadcaster ever embarked upon a deliberate campaign of defaming multiple people, because the rewards of such an activity wouldn't offset the financial losses that would be inevitably incurred when the lies were exposed.

(I'll use "the megaphone" as a shorthand for media that are capable of reaching a lot of people: formerly, radio and broadcast TV; then cable TV; and currently, the Internet. Not just "the Internet", though, but rather: Internet credibility. Access to the credible Internet (the content that search engine relevance algorithms determine should be centered in responses to queries) is gatekept by algorithms; access to old media was gatekept by people.)

At least until the advent of cable TV, then, the broader the reach of a given communication channel, the more closely access to that channel was monitored and regulated. It's not that this system always worked perfectly, because it didn't, just that there was more or less consensus that it was correct for the public to have oversight with respect to who could be entrusted with access to the megaphone.

Now that access to the Internet is widespread, the megaphone is no longer a scarce resource. In a lot of ways, that's a good thing. It has allowed people to speak truth to power and made it easier for people in marginalized groups to find each other. But it also means that it's easy to start a hate campaign based on falsehoods without incurring any personal risk.

I'm not arguing against anonymity here. Clearly, at least some people have total freedom to act in bad faith while using the names they're usually known by: Milo Yiannopoulos and Andrew Breitbart are obvious examples. If use of real names deters harassment, why are they two of the best-known names in harassment?

Algorithm as Excuse

Zoë Quinn pointed out on Twitter that she can no longer share content with her friends, even if she limits access to it, because her name is irrevocably linked to the harassment campaign that her ex-boyfriend started in order to defame her in 2014, otherwise known as GamerGate. If she uses YouTube to share videos, its recommendation engine will suggest to her friends that they watch "related" videos that -- at best -- attack her for her gender and participation in the game development community. There is no individual who works for Google (YouTube's parent company) who made an explicit decision to link Quinn's name with these attacks. Nonetheless, a pattern in YouTube's recommendations emerged because of a concerted effort by a small group of dedicated individuals to pollute the noosphere in order to harm Quinn. If you find this outcome unacceptable, and I do, we have to consider the chain of events that led to it and ask which links in the chain could be changed so this doesn't happen to someone else in the future.

There is a common line of response to this kind of problem: "You can't get mad at algorithms. They're objective and unbiased." Often, the implication is that the person complaining about the problem is expecting computers to be able to behave sentiently. But that's not the point. When we critique an algorithm's outcome, we're asking the people who design and maintain the algorithms to do better, whether the outcome is that it uses too much memory or that it causes a woman to be re-victimized every time someone queries a search engine for her name. Everything an algorithm does is because of a design choice that one or several humans made. And software exists to serve humans, not the other way around: when it doesn't do what we want, we can demand change, rather than changing ourselves so that software developers don't have to do their jobs. By saying "it's just an algorithm", we can avoid taking responsibility for our values as long as we encode those values as a set of rules executable by machine. We can automate disavowal.

How did we get here -- to a place where anyone can grab the megaphone, anyone can scribble in the phone book, and people who benefit from the dissemination of this false information are immune from any of the risks? I'll try to answer that in part 3.

To be continued.


Do you like this post? Support me on Patreon and help me write more like it.

This post is the first in a 4-part series. Part 2 is "Phone Books and Megaphones."

Defame and Blame

The Internet makes it cheap to damage someone else's reputation without risking your own. The asymmetry between the low cost of spreading false information and the high cost to victims of such attacks is an economic and architectural failure, an unintended consequence of a communications infrastructure that's nominally decentralized while actually largely centralized under the control of a few advertising-based companies.

We do not hear a lot of discussion of harassment and defamation as either an economic failure or an engineering failure. Instead, we hear that online harassment is sad but inevitable, or that it happens "because people suck." As Anil Dash wrote, "don't read the comments" normalizes the expectation that behavior online will sink to the lowest common denominator and stay there. People seem to take a similar approach to outright harassment as they do to comments expressing bad opinions.

The cases I'm talking about, like the defamation of Kathy Sierra or the Gamergate coordinated harassment campaign, are effective because of their use of proxy recruitment. Effective propagandists who have social capital have learned how to recruit participants for their harassment campaigns: by coming up with a good lie and relying on network effects to do the rest of the work. Spreading false information about a woman -- particularly a woman who is especially vulnerable because of intersecting marginalized identities -- is easy because it confirms sexist biases (conscious or not-so-conscious) that we all have. Since most of us have internalized the belief that women are less competent, convincing people that a woman slept her way to the top doesn't take much effort.

"Don't read the comments" isn't good advice for people who are merely being pestered. (And anyway, we might question the use of the word "merely", since having to manage a flood of unwanted comments in order to receive desired communication tends to have a certain isolating effect on a person.) But it's especially bad advice for people who are being defamed. What good does it do to ignore the people spreading lies about you when ignoring them won't change what set of web pages a search engine returns as the top ten hits for your name? When you advise targets of harassment to "ignore it" or to "not feed the trolls", you shift responsibility onto victims and away from the people who benefit from the spread of false information (and I don't just mean the people who initiate harassment campaigns). In short, you blame victims.

Algorithms, Advertising, and Accountability

We neither talk much about the democratization of defamation, nor know how to mitigate it. It happens for a reason. Online harassment and defamation campaigns are an inevitable consequence of a telecommunications infrastructure that is dominated by for-profit advertising-supported businesses governed by algorithms that function autonomously. However, neither the autonomy of algorithms nor the ad-supported business model that most social media and search engine companies share is inevitable. Both are a result of decisions made by people, and both can be changed if people have the will to do so. The combination of ads and unsupervised algorithms currently defines the political economy of telecommunications, but it's by no means inevitable, natural, or necessary.

Broadcast television is, or was, advertising-supported, but it didn't lend itself to harassment and defamation nearly as easily, since a relatively small group of people had access to the megaphone. Likewise, online services don't have to encourage bad-faith speech, and discouraging it doesn't necessarily require a huge amount of labor: for example, eBay functions with minimal human oversight by limiting its feedback function to comments that go with an actual financial transaction. However, online search engines and recommendation systems typically use an advertising-based business model where customers pay for services with their attention rather than with money, and typically function with neither human supervision nor any design effort paid to discouraging defamation. Because of these two properties, it's relatively easy for anyone who's sufficiently determined to take control of what shows up when somebody looks up your name in the global distributed directory known as your favorite popular search engine -- that is, as long as you can't afford the public relations apparatus it takes to guard against such attacks. Harassment campaigns succeed to the extent that they exploit the ad-based business model and the absence of editorial oversight that characterize new media.

What This Article is Not About

Three topics I'm not addressing in this essay are:
  • Holding public figures accountable. When people talk about wanting to limit access to the megaphone that search engines make freely available to sufficiently persistent individuals, a common response is, "Are you saying you want to limit people's ability to hold powerful people accountable?" I think it's important for private citizens to be able to use the Internet to expose wrongdoing by powerful people, such as elected officials. I don't agree with the assumption behind this question: the assumption that private citizens ought to be exposed to the same level of public scrutiny as public figures are.
  • "Public shaming." What some people call "public shaming" refers to criticism of a person for a thing that person actually said. When Jon Ronson wrote about Justine Sacco getting "publicly shamed", he didn't mean that people falsely accused her of using her public platform to make a joke at the expense of people with AIDS. He and Sacco's critics agree that she did freely choose to make that joke. I'm talking about something different: when people use technology to construct a false narrative that portrays their adversary as having said something the adversary didn't say. This is not an article about "public shaming".

    The difference between defamation and shaming is that defamation is defined by the behavior of the subject rather than the emotional reaction of the object; the latter sort of rests on the idea that it's wrong to make certain people feel certain ways, and I don't agree with that idea.

  • Censorship. I'm not advocating censorship when I ask how we entered a technological regime in which quality control for information retrieval algorithms is difficult or impossible without suppressing legitimate speech. I'm pointing out that we've designed ourselves into a system where no fine distinctions are possible, and the rapid dissemination of lies can't be curtailed without suppressing truth. As Sarah Jeong points out in her book The Internet of Garbage, the belief that discouraging harassment means encouraging censorship is founded on the false assumption that addressing harassment online means suppressing or deleting content. In fact, search engines already filter, prioritize, and otherwise implement heuristics about information quality. Some of the same technologies could be used to -- in Jeong's words -- dampen harassment and protect the targets of harassment. If you object to that, then surely you also object to the decisions encoded in information retrieval algorithms about what documents are most relevant to a query. (A sketch of what reusing that machinery could look like follows this list.)
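To make the distinction between dampening and deleting concrete, here is a deliberately simplified sketch of my own (no real search engine works this way, and estimating the harassment signal is the genuinely hard part, which I assume away here): the same scoring-and-ordering machinery that already encodes judgments about relevance and quality could take a coordinated-harassment signal as one more input, pushing brigading content down the rankings without removing anything from the index.

```python
# Hypothetical illustration only -- not any real search engine's ranking
# code, and the hard problem of estimating the harassment signal is
# assumed away here. The point is that dampening reuses machinery ranking
# systems already have: scoring and ordering results, not deleting them.

def score_result(relevance: float, quality: float,
                 brigading_signal: float, dampening_weight: float = 2.0) -> float:
    """Combine ordinary ranking signals with an abuse-dampening term.

    relevance        -- how well the document matches the query (0..1)
    quality          -- existing information-quality heuristics (0..1)
    brigading_signal -- estimated likelihood that the document is part of a
                        coordinated harassment campaign (0..1)
    """
    return relevance * quality - dampening_weight * brigading_signal


def rank(results: list[dict]) -> list[dict]:
    # Documents flagged as likely brigading sink toward the bottom; nothing
    # is removed from the index, which is the distinction between dampening
    # and censorship that this essay is drawing.
    return sorted(
        results,
        key=lambda r: score_result(r["relevance"], r["quality"], r["brigading_signal"]),
        reverse=True,
    )
```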

What's Next

So far, I've argued that social network infrastructure has two design flaws which serve to amplify rather than dampen harassment:
  • Lack of editorial oversight means that the barrier to entry to publishing has changed from being a journalist (while journalists have never been perfect, at least they're members of a profession with standards and ethics) to being someone with a little charisma and a lot of free time.
  • Advertising-supported business models mean that a mildly charismatic, very bored antihero can find many bright people eager to help disseminate their lies, because lies are provocative and provocative stories get clicks.

In the next three installments, I'll elaborate on how we got into this situation and what we could do to change it.


Do you like this post? Support me on Patreon and help me write more like it.
