tim: text: "I'm not offended, I'm defiant" (defiant)
Much of the conflict between "social justice warriors" and their antagonists arises from a conflict between mutual trust as a political foundation, and coercion (arising from distrust) as a political tactic. (I previously wrote about this conflict in "The Christians and the Pagans".)

People who are used to operating on coercion assume the worst of others: they expect to be coerced into doing good, and they expect to have to coerce others in order to get what they want or need. People who are more used to operating on trust assume that others will usually want to help, and will act in good faith out of a similar desire for mutual trust.

I want to be clear that when I talk about coercion-based people, I'm not talking about sociopaths or any other category that's constructed based on innate neurological or psychological traits. In fact, people might act coercion-based in one situation, and trust-based in another. For example, a white feminist might act like they're trust-based in a situation that involves gender inequality, but coercion-based when it comes to examining racism. And I'm also not saying people never cross over from one group into another -- I think it can happen in both directions. But to stop relying on coercion requires work, and there are few incentives to do that work. There are, however, a lot of incentives to give up trust in favor of coercion (or at least pretend to) and give up your empathy.

If you assume the worst of other people, of course you won't be able to imagine any way to achieve your goals other than coercion. Assuming the worst isn't a character flaw -- it's taught, and thus, can be unlearned. At the same time, experience isn't an excuse for treating others badly (and people who assume the worst of others will treat others badly, partly because it helps make their assumptions self-fulfilling, removing the need for them to change their assumptions and behavior). We are all obligated to do the work that it takes to live with others while minimizing the harm that we do to them.



Do you like this post? Support me on Patreon and help me write more like it.

tim: "System Status: Degraded" (degraded)
CW: discussion of abuse, gaslighting, and silencing of abuse survivors
"I feel like a thing non-queer ppl seem to often not get is the importance of protecting children from their parents" -- [twitter.com profile] mcclure111 on Twitter
I was glad to read this tweet by [twitter.com profile] mcclure111 because it's a truth that's deeply known by many of us who are queer, or abuse survivors, or both. It's a truth that's as rarely stated as it is deeply known.

But the tweet provoked as much discomfort in others as relief in me. This reply is a representative example of the things people say to survivors speaking uncomfortable truths:

"(kids definitely need protecting from parental harm, but many parents I know, including my own, are Really Good)"

"Many parents are good" is a statement devoid of denotation. When somebody utters a sequence of words that say nothing, I have to ask what they are trying to do by saying those words. Are they trying to take control of the conversation? Are they putting the speaker in their place? Are they expressing discomfort at having their belief in a just world disrupted? Whatever the motivation, direct verbal communication isn't it.

"Many parents... are Really Good" may seem shallow and obvious, but when I ask what those words do rather than what they mean, there's a lot to unpack. Ultimately, "many parents are good" has little to do with the character of the unnamed individuals being defended and much to do with defending the practice of authoritarian parenting.



Thanks to the people who read a draft of this post and contributed feedback that helped me make it better, particularly [twitter.com profile] alt_kia.
Do you like this post? Support me on Patreon and help me write more like it.

tim: text: "I'm not offended, I'm defiant" (defiant)
The question of whether "male" means something different from "man", and whether "female" means something different from "woman", has come up in two different situations for me in the past few weeks. I like being able to hand people a link rather than restating the same thing over and over, so here's a quick rundown of why I think it's best to treat "male" as the adjectival form of "man" and "female" as the adjectival form of "woman".

I prioritize bodily autonomy and self-definition. Bodily autonomy means people get to relate to their bodies in the way that they choose; if we're to take bodily autonomy seriously, respecting self-definition is imperative. If you use language for someone else's body or parts thereof that that person wouldn't use for themselves, you are saying that you know better than they do how they should relate to their body.

For example: I have a uterus, ovaries, and vagina, and they are male body parts, because I'm male. Having been coercively assigned female at birth doesn't change the fact that I've always been male. Having an XX karyotype doesn't make me female (I'm one of the minority of people who actually know their karyotype, because I've had my DNA sequenced). Those are male chromosomes for me, because they're part of me and I'm male. If I ever get pregnant and give birth, I'll be doing that as a male gestator.

I don't know too many people who would want to be referred to as a male woman or a female man, so I'm personally going to stick to using language that doesn't define people by parts of their bodies that are private. And no, you can't claim parts of my body are "female" without claiming I am -- if they're female, whose are they? Not mine.

If someone does identify as a male woman or as a female man, cool. The important thing is that we use those words to describe them because those are the words they use to describe themself rather than because of what sociopolitical categories we place them in based on their body parts.

For extra credit, explain why the widespread acceptance of the sex-vs.-gender binary is the worst thing that ever happened to transsexual people.

Further reading: [personal profile] kaberett, Terms you don't get to describe me in, #2: female-bodied.
Do you like this post? Support me on Patreon and help me write more like it.
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
Bunnies in wine glasses!

(from [tumblr.com profile] xxdaybreak)

Now that I've got your attention: my friend Erica is raising money for much-needed trauma therapy and could use your help. I've known her IRL for ten years and can vouch for her as much as I can for anyone in the world; she's a real person and the money will go to do what it says on the tin. Erica is someone who's supported me in a myriad of ways, and I'm not the only one, so if you help her, you'll be helping me. She just needs $145 more in order to meet her goal.

If you have a couple bucks to spare: do it to support an intersectional social justice writer, do it to support a disabled queer trans woman of color, do it to redistribute wealth, or just do it because that would make me happy. Here's the link to her fundraiser. I reserve the right to keep nagging you all until she meets her goal.

Edit: Erica reached her goal! Thanks to those who donated.
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
Here's a bunny!

(source: [twitter.com profile] carrot666 by way of [tumblr.com profile] kaberabbits)

Now that I've got your attention: my friend Erica is raising money for much-needed trauma therapy and could use your help. I've known her IRL for ten years and can vouch for her as much as I can for anyone in the world; she's a real person and the money will go to do what it says on the tin. Erica is someone who's supported me in a myriad of ways, and I'm not the only one, so if you help her, you'll be helping me.

If you have a couple bucks to spare: do it to support an intersectional social justice writer, do it to support a disabled queer trans woman of color, do it to redistribute wealth, or just do it because that would make me happy. Here's the link to her fundraiser. I reserve the right to keep nagging you all until she meets her goal.
tim: "System Status: Degraded" (degraded)
This post is the last in a 4-part series. The first three parts were "Defame and Blame", "Phone Books and Megaphones," and "Server-Side Economics."

Harassment as Externality

In part 3, I argued that online harassment is not an accident: it's something that service providers enable because it's profitable for them to let it happen. To know how to change that, we have to follow the money. There will be no reason to stop abuse online as long as advertisers are the customers of the services we rely on. To enter into a contract with a service you use, and to expect the service provider to uphold their end of it, you have to be their customer, not their product. As their product, you have no more standing to enter into such a contract than do the underground cables that transmit content.

Harassment, then, is good for business -- at least as long as advertisers are customers and end users are raw material. If we want to change that, we'll need a radical change to the business models of most Internet companies, not shallow policy changes.

Deceptive Advertising

Why is false advertising something we broadly disapprove of -- something that's, in fact, illegal -- but spreading false information in order to entice more eyeballs to view advertisements isn't? Why is it illegal to run a TV ad that says "This toy will run without electricity or batteries," but not illegal for a social media site to surface the message, "Alice is a slut, and while we've got your attention, buy this toy?" In either case, it's lying in order to sell something.

Advertising will affect decision-making by Internet companies as long as advertising continues to be their primary revenue source. If you don't believe in the Easter Bunny, you shouldn't believe executives either when they tell you that ad money is a big bag of cash that Santa Claus delivers with no strings attached. Advertising incentivizes ad-funded media to do whatever gets the most attention, regardless of truth. The choice to do what gets the most attention has ethical and political significance, because achieving that goal comes at the expense of other values.

Should spreading false information have a cost? Should dumping toxic waste have a cost? They both cost money and time to clean up. CDA 230 shields sites that profit from user-generated content from liability for that content, so they never pay any of its costs, and maybe it's time to rethink that. A search engine is not like a common carrier -- one of the differences is that it allows one-to-many communication. There's a difference between building a phone system that any one person can use to call anyone else, and setting up an autodialer that lets the lucky 5th callee record a new message for it.

Accountability and Excuses

"Code is never neutral; it can inhibit and enhance certain kinds of speech over others. Where code fails, moderation has to step in."
-- Sarah Jeong, The Internet of Garbage
Have you ever gone to the DMV or called your health insurance company and been told "The computer is down" when, you suspected, the computer was working fine and it just wasn't in somebody's interest to help you right now? "It's just an algorithm" is "the computer is down," writ large. It's a great excuse for failure to do the work of making sure your tools don't reproduce the same oppressive patterns that characterize the underlying society in which those tools were built. And they will reproduce those patterns as long as you don't actively do the work of making sure they don't. Defamation and harassment disproportionately affect the most marginalized people, because those are exactly the people that you can bully with few or no consequences. Make it easier to harass people, to spread lies about them, and you are making it easier for people to perpetuate sexism and racism.

There are a number of tools that technical workers can use to help mitigate the tendency of the communities and the tools that they build to reproduce social inequality present in the world. Codes of conduct are one tool for reducing the tendency of subcultures to reproduce inequality that exists in their parent culture. For algorithms, human oversight could do the same -- people could regularly review search engine results in a way that includes verifying factual claims that are likely to have a negative impact on a person's life if the claims aren't true. It's also possible to imagine designing heuristics that address the credibility of a source rather than just its popularity. But all of this requires work, and it's not going to happen unless tech companies have an incentive to do that work.
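To make the credibility-vs.-popularity idea concrete, here's a minimal sketch of what such a heuristic could look like. This isn't anyone's production ranking system; the fields (popularity, credibility, unverified_personal_claims) are hypothetical placeholders for signals that real humans would still have to define, measure, and review:

# Toy ranking heuristic: demote popular but unverified claims about people.
# All field names are hypothetical stand-ins, not any real search engine's API.
def score(result):
    base = result["popularity"]              # e.g. clicks or inbound links
    credibility = result["credibility"]      # 0.0-1.0, source's verified track record
    penalty = 0.5 if result["unverified_personal_claims"] else 0.0
    return base * (credibility - penalty)

results = [
    {"title": "Sourced, fact-checked article", "popularity": 40,
     "credibility": 0.9, "unverified_personal_claims": False},
    {"title": "Viral anonymous accusation", "popularity": 400,
     "credibility": 0.2, "unverified_personal_claims": True},
]

for r in sorted(results, key=score, reverse=True):
    print(r["title"], round(score(r), 1))

Even this toy version makes the trade-off visible: the viral accusation only drops in the rankings because somebody decided to measure credibility and to penalize unverified claims about people -- which is exactly the design work that doesn't happen without an incentive.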

A service-level agreement (SLA) is a contract between the provider of a service and that service's users that outlines what the users are entitled to expect from the service in exchange for their payment. Because people pay for most Web services with their attention (to ads) rather than with money, we don't usually think about SLAs for information quality. For an SLA to work, we would probably have to shift from an ad-based model to a subscription-based model for more services. We can measure how much money you spend on a service -- we can't measure how much attention you provide to its advertisers. So attention is a shaky basis on which to found a contract. Assuming business models where users pay in a more direct and transparent way for the services they consume, could we have SLAs for factual accuracy? Could we have an SLA for how many death threats or rape threats it's acceptable for a service to transmit?

I want to emphasize one more time that this article isn't about public shaming. The conversation that uses the words "public shaming" is about priorities, rather than truth. Some people want to be able to say what they feel like saying and get upset when others challenge them on it rather than politely ignoring it. When I talk about victims of defamation, that's not who I'm talking about -- I'm talking about people against whom attackers have weaponized online media in order to spread outright lies about them.

People who operate search engines already have search quality metrics. Could one of them be truth -- especially when it comes to queries that impinge on actual humans' reputations? Wikipedia has learned this lesson: its policy on biographies of living persons (BLP) didn't exist from the site's inception, but arose as a result of a series of cases in which people acting in bad faith used Wikipedia to libel people they didn't like. Wikipedia learned that if you let anybody edit an article, there are legal risks; the risks were (and continue to be) especially real for Wikipedia due to how highly many search engines rank it. To some extent, content providers have been able to protect themselves from those risks using CDA 230, but sitting back while people use your site to commit libel is still a bad look... at least if the targets are famous enough for anyone to care about them.

Code is Law

Making the Internet more accountable matters because, in the words of Lawrence Lessig, code is law. Increasingly, software automates decisions that affect our lives. Imagine if you had to obey laws, but weren't allowed to read their text. That's the situation we're in with code.

We recognize that the passenger in a hypothetical self-driving car programmed to run over anything in its path has made a choice: they turned the key to start the machine, even if from then on, they delegated responsibility to an algorithm. We correctly recognize the need for legal liability in this situation: otherwise, you could circumvent laws against murder by writing a program to commit murder instead of doing it yourself. Somehow, when physical objects are involved it's easier to understand that the person who turns the key, who deploys the code, has responsibility. It stops being "just the Internet" when the algorithms you designed and deployed start to determine what someone's potential employers think of them, regardless of truth.

There are no neutral algorithms. An algorithmic blank slate will inevitably reproduce the violence of the social structures in which it is embedded. Software designers have the choice of trying to design counterbalances to structural violence into their code, or to build tools that will amplify structural violence and inequality. There is no neutral choice; all technology is political. People who say they're apolitical just mean their political interests align well with the status quo.

Recommendation engines like YouTube's, or any other search engine with relevance metrics and/or a recommendation system, just recognize patterns -- right? They don't create sexism; if they recommend sexist videos to people who aren't explicitly searching for them, that's because sexist videos are popular, right? YouTube isn't to blame for sexism, right?

Well... not exactly. An algorithm that recognizes patterns will recognize oppressive patterns, like the determination that some people have to silence women, discredit them, and pollute their agencies. Not only will it recognize those patterns, it will reproduce those patterns by helping people who want to silence women spread their message, which has a self-reinforcing effect: the more the algorithm recommends the content, the more people will view it, which reinforces the original recommendation. As Sarah Jeong wrote in The Internet of Garbage, "The Internet is presently siloed off into several major public platforms" -- public platforms that are privately owned. The people who own each silo own so many computing resources that competing with them would be infeasible for all but a very few -- thus, the free market will never solve this problem.
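The self-reinforcing loop described above is easy to demonstrate with a toy model. The sketch below is purely illustrative -- made-up numbers, not a description of YouTube's or anyone else's actual recommender -- but it shows how "recommend whatever already gets views" turns a modest, coordinated initial push into a durable lead:

import random

# Toy rich-get-richer loop: recommend in proportion to current view counts,
# and count every recommendation as another view. All numbers are invented.
views = {"ordinary video": 100, "harassment video": 150}  # a modest initial push

for _ in range(10_000):
    titles = list(views)
    weights = [views[t] for t in titles]
    chosen = random.choices(titles, weights=weights)[0]  # popularity-driven pick
    views[chosen] += 1                                    # the pick drives another view

# Each video gains views in proportion to the views it already has,
# so the initial gap tends to widen rather than even out.
print(views)

Nothing in that loop "hates" anyone; the bias lives entirely in the choice to treat existing attention as the only signal worth amplifying.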

Companies like Google say they don't want to "be evil", but intending to "not be evil" is not enough. Google has an enormous amount of power, and little to no accountability -- no one who manages this public resource was elected democratically. There's no process for checking the power they have to neglect and ignore the ways in which their software participates in reproducing inequality. This happened by accident: a public good (the tools that make the Internet a useful source of knowledge) has fallen under private control. This would be a good time for breaking up a monopoly.

Persistent Identities

In the absence of anti-monopoly enforcement, is there anything we can do? I think there is. Anil Dash has written about persistent pseudonyms, a way to make it possible to communicate anonymously online while still standing to lose something of value if you abuse that privilege in order to spread false information. The Web site Metafilter charges a small amount of money to create an account, in order to discourage sockpuppeting (the practice of responding to being banned from a Web site by coming back to create a new account) -- it turns out this approach is very effective, since people who are engaging in harassment for laughs don't seem to value their own laughs very highly in terms of money.

I think advertising-based funding is also part of the reason why more sites don't implement persistent pseudonyms. The advertising-based business model encourages service providers to make it as easy as possible for people to use their service; requiring the creation of an identity would put an obstacle in the way of immediate engagement. That obstacle is good from the perspective of nurturing quality content, but bad from the perspective that it limits the number of eyeballs that will be focused on ads. And thus, we see another way in which advertising enables harassment.

Again, this isn't a treatise against anonymity. None of what I'm saying implies you can't have 16 different identities for all the communities you participate in online. I am saying that I want it to be harder for you to use one of those identities for defamation without facing consequences.

A note on diversity

Twitter, Facebook, Google, and other social media and search companies are notoriously homogeneous along gendered and racial lines, at least when it comes to their engineering staff and their executives. But what's funny is that Twitter, Facebook, and other sites that make money by using user-generated content to attract an audience for advertisements are happy to use the free labor that a diverse range of people perform for them when they create content (that is, write tweets or status updates). The leaders of these companies recognize that they couldn't possibly hire a collection of writers who would generate better content than the masses do -- and anyway, even if they could, writers usually want to be paid. So they recognize the value of diversity and are happy to reap its benefits. They're just not so enthusiastic about hiring a diverse range of people, since that would mean sharing profits with people who aren't like themselves.

And so here's a reason why diversity means something. People who build complex information systems based on approximations and heuristics have failed to incorporate credibility into their designs. Almost uniformly, they design algorithms that will promote whatever content gets the most attention, regardless of its accuracy. Why would they do otherwise? Telling the truth doesn't attract an audience for advertisers. On the other hand, there is a limit to how much harm an online service can do before the people whose attention they're trying to sell -- their users -- get annoyed and start to leave. We're seeing that happen with Twitter already. If Twitter's engineers and product designers had included more people in demographics that are vulnerable to attacks on their credibility (starting with women, non-binary people, and men of color), then they'd have a more sustainable business, even if it would be less profitable in the short term. Excluding people on the basis of race and gender hurts everyone: it results in technical decisions that cause demonstrable harm, as well as alienating people who might otherwise keep using a service and keep providing attention to sell to advertisers.

Internalizing the Externalities

In the same way that companies that pollute the environment profit by externalizing the costs of their actions (they get to enjoy all the profit, but the external world -- the government and taxpayers -- get saddled with the responsibility of cleaning up the mess), Internet companies get to profit by externalizing the cost of transmitting bad-faith speech. Their profits are higher because no one expects them to spend time incorporating human oversight into pattern recognition. The people who actually generate bad-faith speech get to externalize the costs of their speech as well. It's the victims who pay.

We can't stop people from harassing or abusing others, or from lying. But we can make it harder for them to do it consequence-free. Let's not let the perfect be the enemy of the good. Analogously, codes of conduct don't prevent bad actions -- rather, they give people assurance that justice will be done and harmful actions will have consequences. Creating a link between actions and consequences is what justice is about; it's not about creating dark corners and looking the other way as bullies arrive to beat people up in those corners.

...the unique force-multiplying effects of the Internet are underestimated. There’s a difference between info buried in small font in a dense book of which only a few thousand copies exist in a relatively small geographic location versus blasting this data out online where anyone with a net connection anywhere in the world can access it.
-- Katherine Cross, "'Things Have Happened In The Past Week': On Doxing, Swatting, And 8chan"
When we protect content providers from liability for the content that they have this force-multiplying effect on, our priorities are misplaced. With power comes responsibility; currently, content providers have enormous power to boost some signals while dampening others, and the fact that these decisions are often automated and always motivated by profit rather than pure ideology doesn't reduce the need to balance that power with accountability.
"The technical architecture of online platforms... should be designed to dampen harassing behavior, while shielding targets from harassing content. It means creating technical friction in orchestrating a sustained campaign on a platform, or engaging in sustained hounding."
-- Sarah Jeong, The Internet of Garbage
That our existing platforms neither dampen nor shield isn't an accident -- dampening harassing behavior would limit the audience for the advertisements that can be attached to the products of that harassing behavior. Indeed, they don't just fail to dampen, they do the opposite: they amplify the signals of harassment. At the point where an algorithm starts to give a pattern a life of its own -- starts to strengthen a signal rather than merely repeating it -- it's time to assign more responsibility to companies that trade in user-generated content than we traditionally have. To build a recommendation system that suggests particular videos are worth watching is different from building a database that lets people upload videos and hand URLs for those videos off to their friends. Recommendation systems, automated or not, create value judgments. And the value judgments they surface have an irrevocable effect on the world. Helping content get more eyeballs is an active process, whether or not it's implemented by algorithms people see as passive.

There is no hope of addressing the problem of harassment as long as it continues to be an externality for the businesses that profit from enabling it. Whether the remedy is supporting subscription-based services with our money and withholding our attention from advertising-based ones, expanding legal liability for the signals that a service selectively amplifies, or normalizing the use of persistent pseudonyms, something has to change: people will continue to have their lives limited by Internet defamation campaigns as long as media companies can profit from such campaigns without paying their costs.


Do you like this post? Support me on Patreon and help me write more like it.

tim: "System Status: Degraded" (degraded)
This post is the third in a 4-part series. The first two parts were "Defame and Blame" and "Phone Books and Megaphones."

Server-Side Economics

In "Phone Books and Megaphones", I talked about easy access to the megaphone. We can't just blame the people who eagerly pick up the megaphone when it's offered for the content of their speech -- we also have to look at the people who own the megaphone, and why they're so eager to lend it out.

It's not an accident that Internet companies are loath to regulate harassment and defamation. There are economic incentives for the owners of communication channels to disseminate defamation: they make money from doing it, and don't lose money or credibility in the process. There are few incentives for the owners of these channels to maintain their reputations by fact-checking the information they distribute.

I see three major reasons why it's so easy for false information to spread:

  • Economic incentives to distribute any information that gets attention, regardless of its truth.
  • The public's learned helplessness in the face of software, which makes it easy for service owners to claim there's nothing they can do about defamation. By treating the algorithms they themselves implemented as black boxes, their designers can disclaim responsibility for the actions of the machines they set into motion.
  • Algorithmic opacity, which keeps the public uninformed about how code works and makes it more likely they'll believe that it's "the computer's fault" and that people can't change anything.

Incentives and Trade-Offs

Consider email spam as a cautionary tale. Spam and abuse are both economic problems. The problem of spam arose because the person who sends an email doesn't pay the cost of transmitting it to the recipient. This creates an incentive to use other people's resources to advertise your product for free. Likewise, harassers can spam the noosphere with lies, as they continue to do in the context of GamerGate, and never pay the cost of their mendacity. Even if your lies get exposed, they won't be billed to your reputation -- not if you're using a disposable identity, or if you're delegating the work to a crowd of people using disposable identities (proxy recruitment). The latter is similar to how spammers use botnets to get computers around the world to send spam for them, usually unbeknownst to the computers' owners -- except rather than using viral code to co-opt a machine into a botnet, harassers use viral ideas to recruit proxies.

In The Internet of Garbage, Sarah Jeong discusses the parallels between spam and abuse at length. She asks why the massive engineering effort that's been put towards curbing spam -- mostly successfully, at least in the sense of saving users from the time it takes to manually filter spam (Internet service providers still pay the high cost of transmitting it, only to be filtered out at the client side) -- hasn't been applied to the abuse problem. I think the reason is pretty simple: spam costs money, but abuse makes money. By definition, almost nobody wants to see spam (a tiny percentage of people do, which is why it's still rewarding for spammers to try). But lots of people want to see provocative rumors, especially when those rumors reinforce their sexist or racist biases. In "Trouble at the Koolaid Point", Kathy Sierra wrote about the incentives for men to harass women online: a belief that any woman who gets attention for her work must not deserve it, must have tricked people into believing her work has value. This doesn't create an economic incentive for harassment, but it does create an incentive -- meanwhile, if you get more traffic to your site and more advertising money because someone's using it to spread GamerGate-style lies, you're not going to complain. Unless you follow a strong ethical code, of course, but tech people generally don't. Putting ethics ahead of profit would betray your investors, or your shareholders.

If harassment succeeds because there's an economic incentive to let it pass through your network, we have to fight it economically as well. Moralizing about why you shouldn't let your platform enable harassment won't help, since the platform owners have no shame.

Creating these incentives matters. Currently, there's a world-writeable database with everyone's names as the keys, with no accounting and no authentication. A few people control it and a few people get the profits. We shrug our shoulders and say "how can we trace the person who injected this piece of false information into the system? There's no way to track people down." But somebody made the decision to build a system in which people can speak with no incentive to be truthful. Alternative designs are possible.

Autonomous Cars, Autonomous Code

Another reason why there's so little economic incentive to control libel is that the public has a sort of learned helplessness about algorithms... at least when it's "just" information that those algorithms manipulate. We wouldn't ask why a search engine returns the top results that it returns for a particular query (unless we study information retrieval), because we assume that algorithms are objective and neutral, that they don't reproduce the biases of the humans who built them.

In part 2, I talked about why "it's just an algorithm" isn't a valid answer to questions about the design choices that underlie algorithms. We recognize this better for algorithms that aren't purely about producing and consuming information. We recognize that despite being controlled by algorithms, self-driving cars have consequences for legal liability. It's easy to appreciate the threat that cars pose to our lives, and we're correctly disturbed by the idea that you or someone you love could be harmed or killed by a robot who can't be held accountable for it. Of course, we know that the people who designed those machines can be held accountable if they create software that accidentally harms people through bugs, or deliberately harms people by design.

Imagine a self-driving car designer who programmed the machines to act in bad faith: for example, to take risks to get the car's passenger to their destination sooner at the potential expense of other people on the road. You wouldn't say "it's just an algorithm, right?" Now, what if people died due to unforeseen consequences of how self-driving car designers wrote their software rather than deliberate malice? You still wouldn't say, "It's just an algorithm, right?" You would hold the software designers liable for their failure to test their work adequately. Clearly, the reason why you would react the same way in the good-faith scenario as in the bad-faith one is the effect of the poor decision, rather than whether the intent was malicious or merely careless.

Algorithms that are as autonomous as self-driving cars, and perhaps less transparent, control your reputation. Unlike with self-driving cars, no one is talking about liability for what happens when they turn your reputation into a pile of burning wreckage.

Algorithms are also incredibly flexible and changeable. Changing code requires people to think and to have discussions with each other, but it doesn't require much attention to the laws of physics, and, other than paying humans for their time, it has little cost. Exploiting the majority's lack of familiarity with code in order to act as if modifying software were a huge burden is a good way to avoid work, but a bad way to tend the garden of knowledge.

Plausible Deniability

Designers and implementors of information retrieval algorithms, then, enjoy a certain degree of plausible deniability that designers of algorithms to control self-driving cars (or robots or trains or medical devices) do not.

During the AmazonFail incident in which an (apparent) bug in Amazon's search software caused books on GLBT-related topics to be miscategorized as "adult" and hidden from searches, defenders of Amazon cried "It's just an algorithm." The algorithm didn't hate queer people, they said. It wasn't out to get you. It was just a computer doing what it had been programmed to do. You can't hold a computer responsible.

"It's just an algorithm" is the natural successor to the magical intent theory of communication. Since your intent cannot be known to someone else (unless you tell them -- but then, you could lie about it), citing your good intent is often an effective way to dodge responsibility for bad actions. Delegating actions to algorithms takes the person out of the picture altogether: if people with power delegate all of their actions to inanimate objects, which lack intentionality, then no one (no one who has power, anyway) has to be responsible for anything.

"It's just an algorithm" is also a shaming mechanism, because it implies that the complainer is naïve enough to think that computers are conscious. But nobody thinks algorithms can be malicious. So saying, "it's just an algorithm, it doesn't mean you harm" is a response to something nobody said. Rather, when we complain about the outcomes of algorithms, we complain about a choice that was made by not making a choice. In the context of this article, it's the choice to not design systems with an eye towards their potential use for harassment and defamation and possible ways to mitigate those risks. People make this decision all the time, over and over, including for systems being designed today -- when there's enough past experience that everybody ought to know better.

Plausible deniability matters because it provides the moral escape hatch from responsibility for defamation campaigns, on the part of people who own search engines and social media sites. (There's also a legal escape hatch from responsibility, at least in the US: CDA Section 230, which shields every "provider or user of an interactive computer service" from liability for "any information provided by another information content provider.") Plausible deniability is the escape hatch, and advertising is the economic incentive to use that escape hatch. Combined with algorithm opacity, they create a powerful set of incentives for online service providers to profit from defamation campaigns. Anything that attracts attention to a Web site (and, therefore, to the advertisements on it) is worth boosting. Since there are no penalties for boosting harmful, false information, search and recommendation algorithms are amplifiers of false information by design -- there was never any reason to design them not to elevate false but provocative content.

Transparency

I've shown that information retrieval algorithms tend to be bad at limiting the spread of false information because doing the work to curb defamation can't be easily monetized, and because people have low expectations for software and don't hold its creators responsible for their actions. A third reason is that the lack of visibility of the internals of large systems has a chilling effect on public criticism of them.

Plausible deniability and algorithmic opacity go hand in hand. In "Why Algorithm Transparency is Vital to the Future of Thinking", Rachel Shadoan explains in detail what it means for algorithms to be transparent or opaque. The information retrieval algorithms I've been talking about are opaque. Indeed, we're so used to centralized control of search engines and databases that it's hard to imagine them being otherwise.

"In the current internet ecosystem, we–the users–are not customers. We are product, packaged and sold to advertisers for the benefit of shareholders. This, in combination with the opacity of the algorithms that facilitate these services, creates an incentive structure where our ability to access information can easily fall prey to a company’s desire for profit."
-- Rachel Shadoan
In an interview, Chelsea Manning commented on this problem as well:
"Algorithms are used to try and find connections among the incomprehensible 'big data' pools that we now gather regularly. Like a scalpel, they're supposed to slice through the data and surgically extract an answer or a prediction to a very narrow question of our choosing—such as which neighborhood to put more police resources into, where terrorists are likely to be hiding, or which potential loan recipients are most likely to default. But—and we often forget this—these algorithms are limited to determining the likelihood or chance based on a correlation, and are not a foregone conclusion. They are also based on the biases created by the algorithm's developer....

These algorithms are even more dangerous when they happen to be proprietary 'black boxes.' This means they cannot be examined by the public. Flaws in algorithms, concerning criminal justice, voting, or military and intelligence, can drastically affect huge populations in our society. Yet, since they are not made open to the public, we often have no idea whether or not they are behaving fairly, and not creating unintended consequences—let alone deliberate and malicious consequences."
-- Chelsea Manning, BoingBoing interview by Cory Doctorow

Opacity results from the ownership of search technology by a few private companies, and their desire not to share their intellectual property. If users were the customers of companies like Google, there would be more of an incentive to design algorithms that use heuristics to detect false information that damages people's credibility. Because advertisers are the customers, and because defamation generally doesn't affect advertisers negatively (unless the advertiser itself is being defamed), there is no economic incentive to do this work. And because people don't understand how algorithms work, and couldn't understand any of the search engines they used even if they wanted to (since the code is closed-source), it's much easier for them to accept the spread of false information as an inevitable consequence of technological progress.

Manning's comments, especially, show why the three problems of economic incentives, plausible deniability, and opacity are interconnected. Economics give Internet companies a reason to distribute false information. Plausible deniability means that the people who own those companies can dodge any blame or shame by assigning fault to the algorithms. And opacity means nobody can ask for the people who design and implement the algorithms to do better, because you can't critique the algorithm if you can't see the source code in the first place.

It doesn't have to be this way. In part 4, I'll suggest a few possibilities for making the Internet a more trustworthy, accountable, and humane medium.

To be continued.


Do you like this post? Support me on Patreon and help me write more like it.

tim: "System Status: Degraded" (degraded)
This post is the second in a 4-part series. The first part was "Defame and Blame".

Phone Books and Megaphones

Think back to 1986. Imagine if somebody told you: "In 30 years, a public directory that's more accessible and ubiquitous than the phone book is now will be available to almost everybody at all times. This directory won't just contain your contact information, but also a page anyone can write on, like a middle-school slam book but meaner. Whenever anybody writes on it, everybody else will be able to see what they wrote." I don't think you would have believed it, or if you found it plausible, you probably wouldn't have found this state of affairs acceptable. Yet in 2016, that's how things are. Search engine results have an enormous effect on what people believe to be true, and anybody with enough time on their hands can manipulate search results.

Antisocial Network Effects

When you search for my name on your favorite search engine, you'll find some results that I wish weren't closely linked to my name. People who I'd prefer not to think about have written blog posts mentioning my name, and those articles are among the results that most search engines will retrieve if you're looking for texts that mention me. But that pales in comparison with the experiences of many women. A few years ago, Skud wrote:

"Have you ever had to show your male colleagues a webpage that calls you a fat dyke slut? I don’t recommend it."

Imagine going a step further: have you ever had to apply for jobs knowing that if your potential manager searches for your name online, one of the first hits will be a page calling you a fat dyke slut? In 2016, it's pretty easy for anybody who wants to make that happen to somebody else, as long as the target isn't unusually wealthy or connected. Not every potential manager is going to judge someone negatively just because someone called that person a fat dyke slut on the Internet, and in fact, some might judge them positively. But that's not the point -- the point is that if you end up in the sights of a distributed harassment campaign, then one of the first things your potential employers will know about you, possibly for the rest of your life, might be that somebody called you a fat dyke slut. I think most of us, if we had the choice, wouldn't choose that outcome.

Suppose the accusation isn't merely a string of generic insults, but something more tangible: suppose someone decides to accuse you of having achieved your professional position through "sleeping your way to the top," rather than merit. This is a very effective attack on a woman's credibility and competence, because patriarchy primes us to be suspicious of women's achievements anyway. It doesn't take much to tip people, even those who don't consciously hold biases against women, into believing these attacks, because we hold unconscious biases against women that are much stronger than anyone's conscious bias. It doesn't matter if the accusation is demonstrably false -- so long as somebody is able to say it enough times, the combination of network effects and unconscious bias will do the rest of the work and will give the rumor a life of its own.

Not every reputation system has to work the way that search engines do. On eBay, you can only leave feedback for somebody else if you've sold them something or bought something from them. In the 17 years since I started using eBay, that system has been very effective. Once somebody accumulates social capital in the form of positive feedback, they generally don't squander that capital. The system works because having a good reputation on eBay has value, in the financial sense. If you lose your reputation (by ripping somebody off), it takes time to regain it.

On the broader Internet, you can use a disposable identity to generate content. Unlike on eBay, there is no particular reason to use a consistent identity in order to build up a good track record as a seller. If your goal is to build a #personal #brand, then you certainly have a reason to use the same name everywhere, but if your goal is to destroy someone else's, you don't need to do that. The ready availability of disposable identities ("sockpuppets") means that defaming somebody is a low-risk activity even if your accusations can be demonstrated false, because by the time somebody figures out you made your shit up, you've moved on to using a new name that isn't sullied by a track record of dishonesty. So there's an asymmetry here: you can create as many identities as you want, for no cost, to destroy someone else's good name, but having a job and functioning in the world makes it difficult to change identities constantly.

The Megaphone

For most of the 20th century, mass media consisted of newspapers, then radio and then TV. Anybody could start a newspaper, but radio and TV used the broadcast spectrum, which is a public and scarce resource and thus is regulated by governmental agencies. Because the number of radio and TV channels was limited, telecommunications policy was founded on the assumption that some amount of regulation of these channels' use was necessary and did not pose an intrinsic threat to free speech. The right to use various parts of the broadcast spectrum was auctioned off to various private companies, but this was a limited-scope right that could be revoked if those companies acted in a way that blatantly contravened the public interest. A consistent pattern of deception would have been one thing that went against the public interest. As far as I know, no radio or TV broadcaster ever embarked upon a deliberate campaign of defaming multiple people, because the rewards of such an activity wouldn't offset the financial losses that would be inevitably incurred when the lies were exposed.

(I'll use "the megaphone" as a shorthand for media that are capable of reaching a lot of people: formerly, radio and broadcast TV; then cable TV; and currently, the Internet. Not just "the Internet", though, but rather: Internet credibility. Access to the credible Internet (the content that search engine relevance algorithms determine should be centered in responses to queries) is gatekept by algorithms; access to old media was gatekept by people.)

At least until the advent of cable TV, then, the broader the reach of a given communication channel, the more closely access to that channel was monitored and regulated. It's not that this system always worked perfectly, because it didn't, just that there was more or less consensus that it was correct for the public to have oversight with respect to who could be entrusted with access to the megaphone.

Now that access to the Internet is widespread, the megaphone is no longer a scarce resource. In a lot of ways, that's a good thing. It has allowed people to speak truth to power and made it easier for people in marginalized groups to find each other. But it also means that it's easy to start a hate campaign based on falsehoods without incurring any personal risk.

I'm not arguing against anonymity here. Clearly, at least some people have total freedom to act in bad faith while using the names they're usually known by: Milo Yiannopoulos and Andrew Breitbart are obvious examples. If use of real names deters harassment, why are they two of the best-known names in harassment?

Algorithm as Excuse

Zoë Quinn pointed out on Twitter that she can no longer share content with her friends, even if she limits access to it, because her name is irrevocably linked to the harassment campaign that her ex-boyfriend started in order to defame her in 2014, otherwise known as GamerGate. If she uses YouTube to share videos, its recommendation engine will suggest to her friends that they watch "related" videos that -- at best -- attack her for her gender and participation in the game development community. There is no individual who works for Google (YouTube's parent company) who made an explicit decision to link Quinn's name with these attacks. Nonetheless, a pattern in YouTube's recommendations emerged because of a concerted effort by a small group of dedicated individuals to pollute the noosphere in order to harm Quinn. If you find this outcome unacceptable, and I do, we have to consider the chain of events that led to it and ask which links in the chain could be changed so this doesn't happen to someone else in the future.

There is a common line of response to this kind of problem: "You can't get mad at algorithms. They're objective and unbiased." Often, the implication is that the person complaining about the problem is expecting computers to be able to behave sentiently. But that's not the point. When we critique an algorithm's outcome, we're asking the people who design and maintain the algorithms to do better, whether the outcome is that it uses too much memory or that it causes a woman to be re-victimized every time someone queries a search engine for her name. Everything an algorithm does is because of a design choice that one or several humans made. And software exists to serve humans, not the other way around: when it doesn't do what we want, we can demand change, rather than changing ourselves so that software developers don't have to do their jobs. By saying "it's just an algorithm", we can avoid taking responsibility for our values as long as we encode those values as a set of rules executable by machine. We can automate disavowal.

How did we get here -- to a place where anyone can grab the megaphone, anyone can scribble in the phone book, and people who benefit from the dissemination of this false information are immune from any of the risks? I'll try to answer that in part 3.

To be continued.


Do you like this post? Support me on Patreon and help me write more like it.

tim: "System Status: Degraded" (degraded)
This post is the first in a 4-part series. Part 2 is "Phone Books and Megaphones."

Defame and Blame

The Internet makes it cheap to damage someone else's reputation without risking your own. The asymmetry between the low cost of spreading false information and the high cost to victims of such attacks is an economic and architectural failure, an unintended consequence of a communications infrastructure that's nominally decentralized while actually largely centralized under the control of a few advertising-based companies.

We do not hear a lot of discussion of harassment and defamation as either an economic failure or an engineering failure. Instead, we hear that online harassment is sad but inevitable, or that it happens "because people suck." As Anil Dash wrote, "don't read the comments" normalizes the expectation that behavior online will sink to the lowest common denominator and stay there. People seem to take a similar approach to outright harassment as they do to comments expressing bad opinions.

The cases I'm talking about, like the defamation of Kathy Sierra or the Gamergate coordinated harassment campaign, are effective because of their use of proxy recruitment. Effective propagandists who have social capital have learned how to recruit participants for their harassment campaigns: by coming up with a good lie and relying on network effects to do the rest of the work. Spreading false information about a woman -- particularly a woman who is especially vulnerable because of intersecting marginalized identities -- is easy because it confirms sexist biases (conscious or not-so-conscious) that we all have. Since most of us have internalized the belief that women are less competent, convincing people that a woman slept her way to the top doesn't take much effort.

"Don't read the comments" isn't good advice for people who are merely being pestered. (And anyway, we might question the use of the word "merely", since having to manage a flood of unwanted comments in order to receive desired communication tends to have a certain isolating effect on a person.) But it's especially bad advice for people who are being defamed. What good does it do to ignore the people spreading lies about you when ignoring them won't change what set of web pages a search engine returns as the top ten hits for your name? When you advise targets of harassment to "ignore it" or to "not feed the trolls", you shift responsibility onto victims and away from the people who benefit from the spread of false information (and I don't just mean the people who initiate harassment campaigns). In short, you blame victims.

Algorithms, Advertising, and Accountability

We neither talk much about the democratization of defamation, nor know how to mitigate it. It happens for a reason. Online harassment and defamation campaigns are an inevitable consequence of a telecommunications infrastructure that is dominated by for-profit advertising-supported businesses governed by algorithms that function autonomously. However, neither the autonomy of algorithms nor the ad-supported business model that most social media and search engine companies share is inevitable. Both are a result of decisions made by people, and both can be changed if people have the will to do so. The combination of ads and unsupervised algorithms currently defines the political economy of telecommunications, but it's by no means inevitable, natural, or necessary.

Broadcast television is, or was, advertising-supported, but it didn't lend itself to harassment and defamation nearly as easily, since a relatively small group of people had access to the megaphone. Likewise, online services don't have to encourage bad-faith speech, and discouraging it doesn't necessarily require a huge amount of labor: for example, eBay functions with minimal human oversight by limiting its feedback function to comments that go with an actual financial transaction. However, online search engines and recommendation systems typically use an advertising-based business model where users pay for services with their attention rather than with money, and typically function with neither human supervision nor any design effort paid to discouraging defamation. Because of these two properties, it's relatively easy for anyone who's sufficiently determined to take control of what shows up when somebody looks up your name in the global distributed directory known as your favorite popular search engine -- that is, as long as the target can't afford the public relations apparatus it takes to guard against such attacks. Harassment campaigns succeed to the extent that they exploit the ad-based business model and the absence of editorial oversight that characterize new media.

What This Article is Not About

Three topics I'm not addressing in this essay are:
  • Holding public figures accountable. When people talk about wanting to limit access to the megaphone that search engines make freely available to sufficiently persistent individuals, a common response is, "Are you saying you want to limit people's ability to hold powerful people accountable?" I think it's important for private citizens to be able to use the Internet to expose wrongdoing by powerful people, such as elected officials. I don't agree with the assumption behind this question: the assumption that private citizens ought to be exposed to the same level of public scrutiny as public figures are.
  • "Public shaming." What some people call "public shaming" refers to criticism of a person for a thing that person actually said. When Jon Ronson wrote about Justine Sacco getting "publicly shamed", he didn't mean that people falsely accused her of using her public platform to make a joke at the expense of people with AIDS. He and Sacco's critics agree that she did freely choose to make that joke. I'm talking about something different: when people use technology to construct a false narrative that portrays their adversary as having said something the adversary didn't say. This is not an article about "public shaming".

    The difference between defamation and shaming is that defamation is defined by the behavior of the subject rather than the emotional reaction of the object; shaming, by contrast, rests on the idea that it's wrong to make certain people feel certain ways, and I don't agree with that idea.

  • Censorship. I'm not advocating censorship when I ask how we entered a technological regime in which quality control for information retrieval algorithms is difficult or impossible without suppressing legitimate speech. I'm pointing out that we've designed ourselves into a system where no fine distinctions are possible, and the rapid dissemination of lies can't be curtailed without suppressing truth. As Sarah Jeong points out in her book The Internet of Garbage, the belief that discouraging harassment means encouraging censorship is founded on the false assumption that addressing harassment online means suppressing or deleting content. In fact, search engines already filter, prioritize, and otherwise implement heuristics about information quality. Some of the same technologies could be used to -- in Jeong's words -- dampen harassment and protect the targets of harassment. If you object to that, then surely you also object to the decisions encoded in information retrieval algorithms about what documents are most relevant to a query.

What's Next

So far, I've argued that social network infrastructure has two design flaws that serve to amplify rather than dampen harassment:
  • Lack of editorial oversight means that the barrier to entry to publishing has changed from being a journalist (while journalists have never been perfect, at least they're members of a profession with standards and ethics) to being someone with a little charisma and a lot of free time.
  • Advertising-supported business models mean that a mildly charismatic, very bored antihero can find many bright people eager to help disseminate their lies, because lies are provocative and provocative stories get clicks.

In the next three installments, I'll elaborate on how we got into this situation and what we could do to change it.


Do you like this post? Support me on Patreon and help me write more like it.

tim: "Bees may escape" (bees)
"I wish the war was on,
I know this sounds strange to you.
I miss the war-time life,
anything could happen then:
around a corner, behind a door."
-- John Vanderslice, "I Miss the War"


This is the long-form version of a series of tweets that I wrote about resistance to emotional safety. Everything here has been said before by people other than me, but I'm presenting it in the hopes that it may be useful in this form, without attempting to cite sources exhaustively. I probably wouldn't have thought to write it down, though, had I not read this series of tweets from [twitter.com profile] inthesedeserts.

CW: discussion of trauma, emotional abuse, gaslighting, self-harm

There's a thing that can happen when you've spent a lot of time at war. For some of us, it's hard to feel comfortable in safe situations. It's paradoxical, right? I've done my share of writing about codes of conduct and about content warnings (or trigger warnings). I've argued that creating an atmosphere of emotional safety is important, especially for trauma survivors. Because people in marginalized groups are disproportionately likely to be trauma survivors, diversity and inclusion are inextricable from treating survivors like first-class citizens. If safety is so important to me, why would I say that safety also often makes me feel uncomfortable?

It may not make sense, but it's true: safety is both something I seek out and something I often avoid when it's offered to me. In the abstract, it's desirable. But when it starts to seem like a real possibility, it can be super threatening.
Read more... )
tim: "System Status: Degraded" (degraded)
For reasons, I find myself using a Linux machine -- specifically one running Ubuntu Trusty (for all intents and purposes) -- and for other reasons, I wanted my screensaver to display random photos from a folder (specifically a folder of images from posts I liked on Tumblr, but that part is separate).

My computer was set up to use the Cinnamon desktop environment, and I didn't especially want to change that (something I learned when I unintentionally uninstalled it while trying to change the screensaver). In Cinnamon you change the screensaver by going to the system settings and then selecting the Screensaver icon, which presents you with a list of possible screensavers, most of which come from xscreensaver. One option is the GLSlideshow program, which does exactly what I wanted: display photos from a folder you select.

Only problem is, the System Settings GUI lets you choose GLSlideshow as your screensaver but doesn't have any configuration options that are screensaver-specific. So there's no way in the GUI to select the folder of pictures you want to use.

An easy way to address this problem would be to set Cinnamon's internal screensaver to never trigger and to install xscreensaver. But I wanted to run my screensaver when I clicked on the Lock Screen menu option. I couldn't figure out a way to reconfigure Cinnamon's menu options, so I resolved to find a solution that didn't require me to do that or to disable or circumvent Cinnamon's screensaver.

After some digging, I discovered that you can configure the folder GLSlideshow uses by creating a ~/.xscreensaver file -- this post answers that part of the question.
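
For concreteness, the relevant lines in ~/.xscreensaver look roughly like this -- a sketch rather than a complete file (the file xscreensaver generates contains many more settings), with a placeholder path standing in for your actual photos directory:

imageDirectory:     /home/yourname/Pictures/tumblr-likes
chooseRandomImages: True
grabDesktopImages:  False

The second and third settings just make sure the slideshow picks random files from that directory instead of grabbing images of your desktop.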

After I added that dotfile -- oh, and also deleted my ~/.cache directory, which took another 15 or so minutes to figure out (an alternative I tried first, which worked just as well, I think, is to rename the directory with your photos in it, and edit the .xscreensaver file to reflect the new name) -- I had a screensaver that showed random photos from my chosen directory when I locked my screen, but the photo that it chose would stay the same for a long time; I wanted the photos to alternate faster.

edited to add, 2016-05-13: You may also have to delete your ~/.xscreensaver-getimage.cache file, and/or ~/tmp/.xscreensaver-getimage.cache.

After another 15-20 minutes of digging, I found that GLSlideshow has two command-line options, -pan and -duration, that control how long it displays a single photo for. I still don't quite understand the semantics of these flags, but it suffices to set the pan and duration values to the same integer (5 seconds seems reasonable) to get the behavior I want. This is explained in this post.
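
If you want to see what those flags do before touching any settings, you can run the module directly from a terminal -- without -root it opens in its own window. (On Ubuntu the xscreensaver modules usually live in /usr/lib/xscreensaver, though the path may differ on your system.)

/usr/lib/xscreensaver/glslideshow -pan 5 -duration 5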

Okay, but how do I actually pass those flags? GLSlideshow gets invoked by some widget that's part of Cinnamon, which I can't change and which doesn't expose that configuration in the UI. The solution you'll find when you search for this problem is wrong -- or at least, it doesn't work with Cinnamon on Trusty.

The solution that actually worked for me is a hack: a single command

gsettings set org.cinnamon.desktop.screensaver xscreensaver-hack "glslideshow -pan 5 -duration 5"

to replace the specific xscreensaver module that the Cinnamon screensaver runs (the "xscreensaver hack") with that same module, suffixed by the flags that you want to pass to it. This works, although perhaps it shouldn't. Now I have a screensaver that displays a random photo from my photos folder and changes the photo every 5 seconds! Yay!
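
If you want to confirm what the screensaver will actually run, you can read the same key back, which should print the string you just set:

gsettings get org.cinnamon.desktop.screensaver xscreensaver-hack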

But in the meantime -- false starts, accidentally uninstalling Cinnamon (turns out if you use apt to uninstall the Cinnamon screensaver, it helpfully removes all of Cinnamon), and all -- I spent about 2 hours doing something that would take about 3 minutes on a Mac.

How's 2016, the year of the Linux desktop, treating you?

tim: A person with multicolored hair holding a sign that says "Binaries Are For Computers" with rainbow-colored letters (computers)
I thought I would write down what I do to back up my computer (a laptop running Mac OS X), since it took a surprising amount of time to figure out.

I have a private Github repository containing most of the text files I create that I want backed-up and versioned. This costs me $7/month, which is well worth it to me.

Camera phone pictures, and any documents I want to be able to share and/or access easily from my phone and from computers other than my own, go on Google Drive. I pay $2/month for 100 GB of storage, most of which I'm not using.

I back up most of the files in my home directory on my laptop with rsync to rsync.net. This costs me $5/month for 50 GB of storage. I followed their instructions to do nightly backups (which I schedule for 3 AM when my laptop is usually plugged in and on a network, but I'm not using it) using launchd. Doing the initial backup took me over a month, because of somewhat unreliable Internet access and because of the time it took to sift through and figure out what files I could delete and which ones I didn't need a cloud backup of in order to stay under quota. (Part of that time was the time it took for me to consolidate about 5 different external hard drives containing the past 18 years of backups onto my laptop hard disk.) Since I only have 50 GB, I don't back up my music library onto rsync, figuring that a TimeMachine backup is enough and that in the worst case, I can replace almost all of it from the Internet or CDs.
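
For anyone trying to replicate the nightly-backup part, the launchd job looks roughly like the sketch below. This isn't my exact plist -- the label, source path, exclude list, and rsync.net destination are placeholders you'd replace with your own -- and it assumes you've already set up SSH key authentication so rsync can run unattended. Save it as something like ~/Library/LaunchAgents/net.example.rsync-backup.plist and activate it with launchctl load ~/Library/LaunchAgents/net.example.rsync-backup.plist.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Arbitrary identifier for this job; pick your own reverse-DNS-style name. -->
  <key>Label</key>
  <string>net.example.rsync-backup</string>
  <!-- The command launchd runs: rsync the home directory to the remote host,
       skipping the music library. -->
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/rsync</string>
    <string>-az</string>
    <string>--delete</string>
    <string>--exclude</string>
    <string>Music/</string>
    <string>/Users/yourname/</string>
    <string>youruser@yourhost.rsync.net:backup/</string>
  </array>
  <!-- Run every night at 3:00 AM. -->
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>3</integer>
    <key>Minute</key>
    <integer>0</integer>
  </dict>
</dict>
</plist>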

Once a week I plug in my TimeMachine disk and let it do its work, so I always have a full backup that's no more than a week old in addition to the partial cloud backups that I get from git, Google Drive, and rsync.net. Of course, this doesn't help if my house burns down, but does help if my laptop gets lost or stolen when I'm not at home since I keep the backup disk at home.

Writing this, I'm not sure why I'm paying for both rsync and Google Drive, since I have more storage on Google Drive and am paying less. I wanted something that was easy to automate using the command line, but I haven't actually looked into options for doing automatic backups to Google Drive. On the other hand, it took me so much time to get regularly scheduled rsync backups working that I'm reluctant to put more time into it.

I haven't yet figured out how to back up my phone (Android); there doesn't seem to be a good way to back up everything (including text messages) without rooting my phone. I've been reluctant to put in the time required to root my phone, but it looks like I will have to. Suggestions welcome!
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
"Well, Jesus was a homeless lad
With an unwed mother and an absent dad
And I really don't think he would have gotten that far
If Newt, Pat and Jesse had followed that star
So let's all sing out praises to
That longhaired radical socialist Jew

When Jesus taught the people he
Would never charge a tuition fee
He just took some fishes and some bread
And made up free school lunches instead
So let's all sing out praises to
That long-haired radical socialist Jew

He healed the blind and made them see
He brought the lame folks to their feet
Rich and poor, any time, anywhere
Just pioneering that free health care
So let's all sing out praises to
That longhaired radical socialist Jew

Jesus hung with a low-life crowd
But those working stiffs sure did him proud
Some were murderers, thieves and whores
But at least they didn't do it as legislators
So let's all sing out praises to
That longhaired radical socialist Jew

Jesus lived in troubled times
the religious right was on the rise
Oh what could have saved him from his terrible fate?
Separation of church and state.
So let's all sing out praises to
That longhaired radical socialist Jew

Sometimes I fall into deep despair
When I hear those hypocrites on the air
But every Sunday gives me hope
When pastor, deacon, priest, and pope
Are all singing out their praises to
Some longhaired radical socialist Jew.

They're all singing out their praises to....
Some longhaired radical socialist Jew."

-- Hugh Blumenfeld

tim: "System Status: Degraded" (degraded)
This is the second post in a two-part series. The first part is here.

Shrinking the Social Trusted Computing Base



In a software system, the trusted computing base is that portion of software that hasn't been formally verified as correct. For the purposes of this analogy, it's not important what "formally verified" means, just that there is a way to determine whether something has been verified or not -- often, "verified" means automatically checked by a machine. If you have software that verifies other software, you might ask who verifies the verifier. Ultimately, there's always some piece of code that's at the bottom -- you can't have turtles all the way down. That code has to be reviewed by people to increase the likelihood that it's correct. Of course, people can make mistakes and it's always possible that people will fail to spot errors in it -- but the more people review it carefully, the more confident we can be that it's correct.

Moreover, the smaller the amount of code that has to be verified in this exacting way, the more confidence we can have that the whole system is reliable, even though we can never be totally sure that a system is free of errors. When people interested in software quality talk about making the trusted computing base smaller, this is what they mean. People make mistakes, so it's best to have computers (who don't get bored) do the tedious work of checking for errors, and limit the amount of work that fallible humans have to do.

People who understand the imperative to keep the trusted computing base small nevertheless, sometimes, fail to see that social interactions follow a similar principle. In the absence of a formal code of conduct, when you join a group you have to trust that everybody in that group will respect you and treat you fairly. Codes of conduct don't prevent people from harming you, but they do grant increased assurance that if somebody does, there will be consequences for it, and that if you express your concerns to other people in the group, they will take your concerns seriously. When there is a code of conduct, you still have to trust the people in charge of enforcing it to enforce it fairly and humanely. But if you disagree with their actions, you have a document to point to in order to explain why. In the absence of a code of conduct, you instead have to argue with them about whether somebody was or was not being a dick. Such arguments are subjective and unlikely to generate more light than heat. It saves time and energy to be explicit about what we mean by not being a dick. And that, in turn, minimizes work for people joining the group. They just have to review your code of conduct and determine whether they think you will enforce it, rather than reviewing the character of every single person in the group.

It's clear that nerds don't trust a rule like "don't be a dick" when they think it matters. Open-source or free software project maintainers wouldn't replace the GPL or the BSD license with a text file consisting of the words "Don't be a dick." If "don't be a dick" is a good enough substitute for a code of conduct, why can't we release code under a "be excellent to each other" license? Licenses exist because if someone misuses your software and you want to sue them in order to discourage such behavior in the future, you need a document to show the lawyers to prove that somebody violated a contract. They also exist so that people can write open-source software while feeling confident that their work won't be exploited for purposes they disagree with (producing closed-source software). A "don't be a dick" license wouldn't serve these purposes. And a "don't be a dick" code of conduct doesn't serve the purpose of making people feel safe or comfortable in a particular group.

When do you choose to exercise your freedom to be yourself? When do you choose to exercise your freedom to restrain yourself in order to promote equality for other people? "Don't be a dick" offers no answer to these questions. What guidance does "don't be a dick" give me if I want to make dirty jokes in a group of people I'm not intimate with -- co-workers, perhaps? If I take "don't be a dick" to mean they should trust me that I don't intend to be a dick, then I should go ahead and do it, right? But what if I make somebody uncomfortable? Is it their fault, because they failed to trust me enough to believe that my intent was to have a bit of fun? Or was it my fault, for failing to consider that regardless of my true intent, somebody else might not give me the benefit of the doubt? If, rather than not being a dick, I make a commitment to try as hard as I can to take context into account before speaking, and consider how I sound to other people, I might choose to self-censor. I don't know another way to coexist with other people without constantly violating their boundaries. This requires sensitivity and the ability to treat people as individuals, rather than commitment to the fixed code of behavior whose existence "don't be a dick" implies.

I wrote about the idea of "not censoring yourself" before, and noted how saying everything that comes into your head isn't compatible with respecting other people, in "Self-Censorship". If I censor myself sometimes, in different ways depending on what context I'm in, am I failing to be my entire self? Maybe -- or maybe, as I suggested before, I don't have a single "true self" and who I am is context-dependent. And maybe there's nothing wrong with that.

Part of what politics are about is who gets accorded the benefit of the doubt and who gets denied it. For example, when a woman accuses a man of raping her, there's an overwhelming tendency to disbelieve her, which is often expressed as "giving the man the benefit of the doubt" or considering him "innocent until proven guilty." But there is really no neutral choice: either one believes the woman who says she was raped is telling the truth, or believes that she is lying. You can give the benefit of the doubt to the accused and assume he's innocent, or give the benefit of the doubt to the accuser and assume that she would only make such a serious accusation if it's true. When you encourage people to accord others the "benefit of the doubt", you're encouraging them to exercise unconscious bias, because according some people the benefit of the doubt means withholding it from others. In many situations, it's not possible for everybody to be acting in good faith.

Resisting Doublespeak



Maybe we shouldn't be surprised that in an industry largely built on finding ways to deliver a broader audience to advertisers, which nonetheless bills itself as driven by "innovation" and "making the world a better place", doublespeak is so widespread. And advertising-funded companies are ultimately driven by exactly that: everything they do is about delivering more eyeballs to advertisers. If some of the things they do happen to make people's lives better, that's an accident. A company that did otherwise would be breaching its obligations to stockholders or investors.

Likewise, maybe we also shouldn't be surprised that in an industry built on the rhetoric of "rock star" engineers, the baseline assumption is that encouraging everybody to be an individual will result in everybody being able to be their best self. Sometimes, you need choral singers, not rock stars. It might feel good to sing a solo, but often, it's necessary to blend your voice with the rest of the choir. That is, in order to create an environment where it's safe for people to do their best, you need to be attuned to social cues and adjust your behavior to match social norms -- or to consciously act against those norms when it would be better to discard them and build new ones.

Both "be yourself" and "don't be a dick" smack of "there are rules, but we won't tell you what they are." At work, you probably signed an employment agreement. In life, there are consequences if you violate laws. And there are also consequences if you violate norms. "Being yourself" always has limits, and being told to be your entire self tells you nothing about what those limits are. Likewise, "don't be a dick" and its attendant refusal to codify community standards of behavior signifies unwillingness to help newcomers integrate into a community and to help preserve the good things about its culture while also preserving space to be themselves while respecting others.

When you refuse to tell somebody the rules, you're setting them up for failure, because breaking unwritten rules is usually punished quietly, through social isolation or rejection. The absence of written rules is effectively a threat that even if you want to do your best, you could be excluded at any time for violating a norm you didn't know existed. (Also see "The Tyranny of Structurelessness" by Jo Freeman.)

So instead of instructing people to "bring your whole self to work", we could say what is welcome in the office -- ideas, collaboration, respect -- and what should be left at the door -- contempt for other people's chosen programming languages, text editors, belief systems, or dietary habits; exclusive behavior; and marginalizing language. Instead of telling people not to be a dick, we could work together to write down what our communities expect from people. And instead of preaching about changing the world, we could admit that when we work for corporations, we're obligated above all to maximize value for the people who own them.

Saying things you hope are true doesn't make them true. Insisting that your environment is safe for everybody, or that everybody already knows how to not be a dick, doesn't create safety or teach respect, any more than claiming to be a "10x engineer" makes you one. Inclusion requires showing, not telling.
Do you like this post? Support me on Patreon and help me write more like it.

*confetti*

Dec. 16th, 2015 10:21 am
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
I've reached my goal of getting 40 people to donate to the National Network of Abortion Funds for my 35th birthday! I've been doing a birthday fundraiser almost every year since (I think) 2009, and this one has been the most successful ever.

Thanks to everybody who gave for supporting reproductive choice!
tim: "System Status: Degraded" (degraded)
This is the first post in a two-part series.

Creative Commons-licensed image by David Swayze

"Be your entire self at work." You might hear these words during orientation at a new job, if you work for the kind of company that prides itself on its open, informal culture -- a software company in Silicon Valley, perhaps. When you hear that everybody is free to be their entire self at your workplace, do you hear a promise or a threat?

"You're allowed to bring your whole self to work" should be true by default; in an ideal world, it wouldn't need to be said. Repressing essential aspects of your personality is an energy-sapping distraction. At the same time, it's such a broad statement that it denotes nothing -- so we have to ask what it connotes. When your boss (or your boss's boss's boss, or someone acting on that person's behalf) grants you permission to bring your whole self to work, what's the subtext?

Here's another thing you might hear tech people say that's so vague as to be tautological: "We don't need a code of conduct, because all we need to do is be excellent to each other or say 'don't be a dick.'" The tautological part is "don't be a dick", which is an anti-pattern when used as a substitute for clear community expectations. Nobody could reasonably argue against the value of being excellent to other people or in favor of being a dick. As with "be yourself", the vacuity of "don't be a dick" suggests the need to ask what it really means when someone says the only rule we need is "don't be a dick" (or its relative "be excellent to each other".)

"Be yourself" and "don't be a dick" share at least three problems.

  • Unequal distribution of risk: If you're trans, neuroatypical, queer, or poly, you're probably familiar with the risks of disclosing important parts of your life. In the absence of evidence that it's actually safe to be yourself at work, telling people "be yourself" is a request to trust everyone to respond appropriately to you being yourself. That's a lot to ask somebody who is brand-new to a group. Is there a way to show newcomers that it's safe to be who you are here, rather than telling them?
  • Unwritten expectations: "Don't be a dick", when accompanied by unwillingness to codify your community's norms (such as in a written document like a code of conduct), is a request to trust everyone to not be a dick. When norms are codified, you don't have to trust everyone to not be a dick: the document doesn't prevent anyone from being a dick, but it provides a basis for increased trust that if someone is a dick, they will be discouraged from future dickishness and, in the case of repeat offenders, potentially be excluded from interaction.
  • Unhelpful balancing of different goals: Both "be yourself" and "don't be a dick" (the latter with its implication that you're free to do whatever you want as long as you don't think you're being a dick about it) reflect an apparently arbitrary weighting of personal freedom as more important than fairness.


Different people perceive a statement like "be yourself" differently -- and the same person might perceive it differently depending on who's saying it -- because different people have different levels of trust in each other. Trust is political: marginalized people manage risk in different ways than people in dominant groups do, and the more marginalized groups you belong to, the subtler that risk management becomes. Likewise, written community norms benefit newcomers and marginalized people, while unwritten norms (such as the ones implied by "don't be a dick") serve to maintain in-group homogeneity. If people who say "don't be a dick" want to keep their communities uniform, it would behoove them to at least say so.

The assumption that mutual trust already exists may lead you to conclude that we'll be equal when everyone gets to act exactly the way they want. But marginalized people have legitimate reasons not to trust people in groups that dominate them -- namely, past experiences. Trust has to be earned; one way to establish it is by being explicit about expectations.

In computer systems, sometimes we use the terms "pulling a thread" or "thread pulling" for the process of finding the root cause of a problem in a complex system, which is often hidden beneath many layers of abstraction. At the same time, sometimes what seems to be a minor problem as observed from the outside can signify deeply rooted flaws in a system, the way that pulling on a loose thread in a knitted garment can unravel the whole thing. In this essay, I want to pull a cultural thread and examine the roots of the assumptions that underlie statements like "just be yourself." Just as problems in large, distributed computer systems often have causes that aren't obvious, the same is true for social problems. While you don't have to agree with my analysis, I hope you agree with me that it's worth asking questions about why people say things that appear to be trivial or obviously true at first glance.

The Risks of Disclosure



Personal disclosure can be risky, and those risks are distributed unevenly through the population. Here are some examples of what can happen when you do take the risk of being your entire self at work -- or anywhere, for that matter, though all of these reactions are more concerning when they happen in the place where you earn your livelihood, and when they're coming from people who can stop you from making a living.


  • Mentioning your membership in a sexual minority group can make other people uncomfortable in the extreme. You could reasonably debate whether that ought to be true when it comes to talking about kinks, but even mentioning that you're gay or trans can become cause for sexual harassment accusations. You say your company isn't like that? Will someone who's experienced this at a previous employer believe you?
  • If you talk about having PTSD, or ADD/ADHD, or being on the autism spectrum, you may be told "don't label yourself, just live!" To not label yourself -- to not seek solidarity and common ground with others who share your life experiences -- is tantamount to not organizing, not being political, not taking power. Maybe you don't want to be told this for the nth time. (Of course, you also risk retaliation by managers or co-workers who may not be thrilled about having disabled or neuroatypical employees or co-workers.)
  • If you disclose that you are trans, you are likely to be misgendered in the future (or worse).
  • If you mention a chronic illness, people are likely to provide unsolicited and unhelpful advice; dealing with their reactions when you say so can be draining, and smiling and nodding can be draining too.


More broadly, disclosing mental health or sexual/gender minority status (as well as, no doubt, many other identities) means managing other people's discomfort and fielding intrusive questions. Maybe it's easier to not disclose those issues, even if it means letting people think you're someone you aren't. And in some cases, disclosure might just not be worth the discomfort it causes to others. Am I being less real when I keep certain aspects of myself private in the interest of social harmony? Does thinking about how others will feel about what I say make me less authentic? Does being real amount to narcissism?

There are always boundaries to what we reveal about ourselves in non-intimate settings: it's why we wear clothes. Telling people to be authentic obscures where those boundaries are rather than clarifying them. And what does "be who you are" or "be your entire self" mean, anyway? Every person I know gets to see a different side of me. Which one is the real me? Is the person I am when I'm with my closest friend more like the real me than who I am at work, or is it just different? The idea that everybody has a single true self rather than multiple selves of equal status is just a way in which some people formulate their identities, not a universal truth.

I think part of the origin of "be your entire self" rhetoric lies in the imperative -- popular among some cis gay and lesbian people and their allies -- to implore all queer people to come out of the closet. Being open about your identity, they say, is essential to helping queer people gain acceptance. There are a lot of problems with coming-out as a categorical imperative. One of them is that closets are safe, and it's easy to sneer at others' desire for safety when you yourself are safe and secure.

I think "be your entire self" comes from the same place as "everyone should come out." Both statements can be made with good intentions, but also, necessarily, naïve ones.

Unwritten Expectations Impede Trust



"Be yourself" may seem harmless, if trite, but I hope I've shown that it relies on assumptions that are problematic at best. It can also conceal failure to make social expectations clear. Unwritten expectations often serve to exclude people socially, since fear of violating rules you don't know can be a reason to avoid entering an unfamiliar space. When that fear means not applying for a job, or not participating in a community of practice that would benefit from your participation and help you grow as a professional, it has concrete consequences in marginalized people's lives.


"As a reviewer of code, please strive to keep things civil and focused on the technical issues involved. We are all humans, and frustrations can be high on both sides of the process. Try to keep in mind the immortal words of Bill and Ted, "Be excellent to each other."
-- Linux kernel "Code of Conflict"


When you refuse to say what your community's standards for acceptable behavior are, you're not saying that your community has no standards. You're just saying you're not willing to say what they are. When Linus Torvalds says "be excellent to each other", what do people hear? If you're someone socially similar to him, maybe you hear that the kernel community is a safe place for you. If you're someone who has been historically excluded from tech culture, you might hear something different. You might ask yourself: "Why should I trust you to be excellent to me? What's more, how do I know I can trust everyone in this group to be excellent to me, much less trust that everyone's definition of 'excellent' is compatible with my well-being?"

When you say the only rule is "don't be a dick", or implore people to be themselves, or tell people they don't need to put on a suit to work at your company, what you're really saying is "trust me!" Trust everyone in the group not to be a dick, in the first case. Trust everyone not to judge or belittle you, in the second. Trust them to judge you for who you are and not on what you're wearing, in the third case. When somebody says "trust me!" and your gut feeling is that you shouldn't trust them, that's already a sign you don't belong. It's a grunch. It's a reminder that you don't experience the automatic trust that this person or group seems to expect. Does everybody else experience it? Are you the only distrustful one? Is there something wrong with you, or is your mistrust warranted based on your past experiences? Asking yourself those questions takes up time.

Freedom and Equality



Sometimes, freedoms conflict, which is why freedom is just one value that has to be balanced with others, not an absolute. If your freedom of expression prevents me from being at the table, or making a living, or even beginning to realize my potential at all, then your freedom limits mine and the solution involves considering both of our interests, not concluding in the name of "freedom" that you should be able to exclude me. Inequality isn't compatible with freedom, and boosting your "freedom" at my expense is inherently unfair and unequal.

The bridge between freedom and equality is trust. People who trust each other can be who they are while trusting other people to call them out on it if being who they are infringes on other people's well-being. Likewise, people who trust each other will give each other the benefit of the doubt and assume good faith when conflicts happen. But in the absence of trust, freedom won't naturally lead to equality, because marginalized people will (rightly) assume that the power dynamics they're used to are still operating, while less-marginalized people will assume that they are free to keep recreating those power dynamics.


In tech, there's a certain kind of person who often champions "freedom" at the expense of others' safety.

"...if you want me to ‘act professional,’ I can tell you that I’m not interested. I’m sitting in my home office wearing a bathrobe. The same way I’m not going to start wearing ties, I’m *also* not going to buy into the fake politeness, the lying, the office politics and backstabbing, the passive aggressiveness, and the buzzwords." -- Linus Torvalds, as quoted by Elise Ackerman


There's a lot to unpack in this quote; in it, Torvalds exemplifies a tendency among programmers, especially privileged male programmers, to use having to wear a suit or tie as a proxy for the forms of oppression they fear if their (e.g.) open-source project adopts norms about respect which they associate with big companies that produce proprietary software. Torvalds and his ilk might express contempt for the notion of a "safe space", but they actually care a lot about safe spaces: they want spaces in which it's safe for them to wear their bathrobes and swear. They're afraid that creating a space that's safe for every open-source contributor, not just white cis men in bathrobes, might threaten their own safety.

If having to wear a suit is the worst limitation on your life you can imagine, maybe it's time to take a step back and consider the experiences of people with less privilege. In fact, standardized expectations about dress can be helpful, at least when they aren't based on binary gender. Replacing "everyone has to wear a suit" with "only people in T-shirts and jeans will be taken seriously" doesn't fundamentally reduce the degree to which people get judged on their appearance rather than their abilities -- it just replaces one limiting dress code with another. And maybe suits aren't really that limiting. Uniforms can have an equalizing function. I'm not a particular fan of wearing suits all the time myself, but when abolishing suits doesn't result in the emergence of another sartorial hegemony, it potentially burdens people with decisions that they wouldn't have to make if there were clear norms and expectations for dress. As always, there are going to be expectations. I'm not aware of many companies where going to work naked is encouraged. So if suits aren't encouraged, a whole host of decisions have to happen, and guesses have to be made, about what people will think of you based on your clothing. It's a lot of cognitive load. Maybe sometimes, clear expectations about how to dress help people be equal! Who loses when Torvalds and others like him win the ability to work in their bathrobes? Who loses when Torvalds, apparently unable to conceive of sincere politeness and genuine respect, wins the right not to feign regard for others?

"If telling people to be themselves creates unsafe spaces, how can I let people know my space is safe?", you might ask. I'll try to answer that in part 2.
Do you like this post? Support me on Patreon and help me write more like it.
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
Apropos of nothing, bunnies!

(from [tumblr.com profile] awesome-picz on Tumblr)

I like bunnies. I also like expanding access to abortion. If you do too, you should donate to the National Network of Abortion Funds. For my 35th birthday in 3 days, I'm trying to get 40 people to donate -- so far, 32 awesome people have given! Here are their names, and if you comment saying that you gave, I'll add your name too. (Or let me know privately so I can update my tally without using your name, if you would rather be anonymous.)

We can do this!!
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
Donate to the National Network of Abortion Funds or this puppy will be sad! (source)

Well, probably not. But I will be. With 3 days left to my 35th birthday, I'm still trying to get 12 more people to make a donation to help people get abortions. It's just that simple. Here's how to give, and once you do, let me know so I can update my tally!
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
Here's a kitten! (image by Instagram user veggiedayz)

Now that I've got your attention, why not donate to the National Network of Abortion Funds? You'll be helping somebody get the abortion they need. You'll be annoying a forced-birther. And if you do it within the next 4 days, you'll be helping wish me a happy 35th birthday. If you let me know that you gave, I'll be one step closer to not having to post nag messages multiple times a day ;)
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
This bucket of puppies really wishes you would donate to the National Network of Abortion Funds. Well, okay, that's a lie, but what's true is that I do! (Photo from [tumblr.com profile] babyanimalgifs.)

I'll be 35 in 6 days, and all I want for my birthday is for you to donate to the National Network of Abortion Funds. So far, 21 people have donated, bringing me more than half the way towards reaching my goal of donations from 40 people. Please let me know if you give so I can continue tracking my progress! And thanks to the wonderful people who have donated so far (follow the previous link to see their names)
