tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
2030-12-18 02:14 pm

How to post comments if you don't have a Dreamwidth account

I request that you read my comment policy before commenting, especially if you don't know me offline.

If you have a LiveJournal account and want to leave comments on my journal, you can do that without giving Dreamwidth a password or any personal information except an email address. You can follow these instructions (with slight modifications) if you have an account on a site that provides OpenID credentials, too. (For example, any Google or Google+ account should work this way.) Here's how:

  1. Go to the main Dreamwidth page
  2. Follow the "Log In with OpenID" link
  3. In the "Your OpenID URL" box, put yourusername.livejournal.com. For example, if I wanted to log in with my LiveJournal account, I would type "catamorphism.livejournal.com".
  4. Click Login.
  5. Click "Yes, just this time" or "Yes, always" when LiveJournal asks if you want to validate your identity.
  6. The first time you log in, you'll see a message "Please set and confirm your email address". Click the "set" link and follow the instructions.
  7. You'll get an email from Dreamwidth containing a link. Follow the link to confirm your email address.
  8. Follow the instructions. You should now be able to leave comments.

Edited to add as of February 26, 2013: There have been intermittent problems with using OpenID to log in to Dreamwidth. The most reliable way to comment is to create a Dreamwidth account, which is free.
tim: A bright orange fish. (fish)
2016-06-27 07:52 pm

[Linkspam] Monday, June 27

First of all, some shameless bragging: my friends Jamie and Marley were on the front page of Saturday's San Francisco Chronicle, making out at Trans March!

I was also proud to witness Mayor Ed Lee and Supervisor Scott Wiener getting booed off the stage at Trans March. You can't support trans people while supporting police and criminalizing homelessness.

Unrelatedly, here's an adult capybara booping a baby capybara.

Orlando shooting: It’s different now, but Muslims have a long history of accepting homosexuality, by Shoaib Daniyal for scroll.in (2016-06-27). A cool trick that Western white supremacists pull is to attribute blame for homophobia exported by Western countries onto the Asian and African countries into which they exported it. You don't have to fall for it.

"No more rock stars: how to stop abuse in tech communities", by Valerie Aurora, Mary Gardiner, and Leigh Honeywell (2016-06-21). I'm very proud to call the authors of this article my friends; they offer a comprehensive analysis of tech communities' handling of abuse and harassment, as well as many actionable suggestions.

"Patching exploitable communities", by Tom Lowenthal (2016-06-21). A great, succinct summary of the aforementioned article.

KatieConf - if you can write an entire conference lineup consisting only of women named various forms of "Katherine"/"Catherine"/"Katie", then what's your excuse for not being able to find women speakers?

"Who Gets To Be The 'Good Schizophrenic'?", by Esmé Weijun Wang for Buzzfeed (2016-04-07). When we talk about mental illness in an attempt to destigmatize it, we need to go further than drawing a line between nice, friendly mentally ill people who are "only" anxious and depressed, and scary, dangerous mentally ill people who are schizophrenic.

Lecture by John Darnielle at Calvin College's Festival of Faith and Writing (audio, 2016-04-14). I would listen to John Darnielle talk about pretty much anything for 47 minutes, so I don't really know how to sell you on this if you wouldn't.
tim: A bright orange fish. (fish)
2016-06-20 07:41 pm

[Linkspam] Monday, June 20

I'm going to try doing a weekly linkspam post, because why not? Maybe it'll motivate me to get through my Pinboard backlog.

  • "Parents, right? Psh, who needs em!", by Talia Jane (2016-06-20). A hot personal take on the silencing of people who were parented incompetently. "Why would you care about the rocky nature of my personal life? Well, why do you think I’d care about how healthy your personal life is? Why would you think I’d enjoy seeing happy photos of you with your parents, outside of the fact that I might be happy you’re not curled up in a ball crying for six hours?"
  • Unsuck It: A bullshit-business-jargon-to-English translator (occasional ableism but on the whole pretty on-the-mark). "wellness: A notional substitute for a decent health insurance plan. Frequently includes chipper admonishments to do obvious things, such as get off your ass and walk or eat more vegetables."
  • "creativity and responsibility", by [personal profile] graydon2 (2016-06-17). On "creativity" as applied to software development: "I think 'creative' also serves as a rhetorical dodge about expectations, or perhaps more bluntly: responsibilities." Tangentially, this post reminds me of a quote from Samuel Delany that I love:
    The sad truth is, there’s very little that’s creative in creativity. The vast majority is submission – submission to the laws of grammar, to the possibilities of rhetoric, to the grammar of narrative, to narrative’s various and possible restructurings. In a society that privileges individuality, self-reliance, and mastery, submission is a frightening thing.

    (I think the software industry could do with a bit more submission to models, and there's probably something to be teased out here about why some people are so resistant to type systems and other forms of static verification.)
  • "To Keep The Blood Supply Safe, Screening Blood Is More Important Than Banning Donors", by Maggie Koerth-Baker for FiveThirtyEight (2016-06-18). We've all known for a long time that the ban on MSM donating blood is based in homophobia and not science, but it's always nice to see more evidence of that.
  • "The Myth of the Violent, Self-Hating Gay Homophobe", by Cari Romm for New York magazine (2016-06-16). No, homophobes aren't all (or even mostly) closeted self-hating queers. Hetero people really do hate us that much.
  • Interview With a Woman Who Recently Had an Abortion at 32 Weeks, by Jia Tolentino for Jezebel (2016-06-15). Long, harrowing interview with a woman who had a very late-term abortion. Makes me feel glad that there are still a few doctors courageous enough to provide this care, and sad that so many have been terrorized out of doing it.
  • "How Bernie Sanders Exposed the Democrats’ Racial Rift", by Issac J. Bailey for Politico (2016-06-08). "To minority voters, Trump’s candidacy feels like an existential threat. It’s one thing for Republicans to either ignore or embrace his racism; the party already seems unwilling or incapable of making the kinds of adjustments it must to attract more non-white voters. It’s quite another for white Democrats to not appreciate how liberal minorities feel about the possibility of a Trump presidency and what that would say about the state of racial progress in America. It would be a slap in the face, the latest sign that a kind of white privilege—throwing a temper tantrum because they don’t get their way despite how much it hurts people of color—is deeply rooted within liberal, Democratic ranks as well."
  • "The Ethics of Mob Justice", by Sady Doyle for In These Times (2013-11-08). Unfortunately, relevant again. "So we’re left with upholding structural principles, and this brings me to the Internet’s other poisoned gift to social justice: Even as it enhances our ability to censure those who violate the social contract, it makes the individual members of that society more visible, warts and all. Where the radicals of previous generations could spout high-minded rhetoric about the Common Man, Womankind or the Human Spirit while interacting mainly with the limited circle of people they found tolerable, we contemporary activists have to uphold our principles while dealing with the fact that actual common men, women and human spirits are continually being presented to us in harshly lit, unflattering close-up..." (I don't read this article as being opposed to public shaming, and I'm certainly not. Just as taking a skeptical eye to the targeting of women for having unacceptable feelings in public.)
tim: Solid black square (black)
2016-06-13 09:17 pm

A short, inexhaustive list of things I am tired of hearing

CW: violence, homophobia, victim-blaming



"I am so tired of waiting.
Aren’t you,
for the world to become good
and beautiful and kind?
Let us take a knife
and cut the world in two —
and see what worms are eating
at the rind."
-- Langston Hughes

tim: "System Status: Degraded" (degraded)
2016-06-11 07:12 pm

Broken Metaphors, Flawed Technology

Language affects thought, and part of why science isn't objective is that communicating scientific knowledge relies on language, which is always imprecise and governed by politics and culture.

In "The Egg and the Sperm", Emily Martin wrote about how the language used to describe human reproduction distorted the truth. Scientists, mostly cis men, were biased towards seeing sperm as active penetrators of the passive egg. In fact, as Martin detailed, eggs do a lot of active work to reject weak sperm and entice strong sperm. (Of course, even the metaphor of "weak" or "strong" sperm reflects socially mediated beliefs.)

Another example from reproduction is the misunderstanding of the biological function of menstruation, which also arose from sociopolitical biases about gender. In a 2012 journal article, Emera, Romero and Wagner posited that the function of menstruation has been misunderstood due to sexist beliefs that bodies coded as female are intrinsically nurturing: the endometrial lining was previously construed as the uterus creating a nurturing environment for a potential embryo, when in fact it might be more accurate to view it as a hostile environment that only the strongest embryos can survive (there's that "strong/weak" political language again). I'm not qualified to assess the accuracy of Emera et al.'s idea, but I am qualified to observe that assessing its validity has so far been hindered by the misapplication of gender stereotypes to biology.

Yet another example is that of same-sex sexual behavior in non-human animals; Bruce Bagemihl's book Biological Exuberance details the history of (again, mostly heterosexual cis male) scientists getting it grievously wrong about the nature and function of sexual behavior. It would be funny if it weren't so harmful. Just one example is the publication of a paper, in 1981, entitled "Abnormal Sexual Behavior of Confined Female Hemichienus auritus syriacus [Long-eared Hedgehogs]". It's not objective, rational, or scientific to label hedgehog sex as "abnormal" -- rather, the label reflects social and political biases. And in that case (and many similar ones), politics kept scientists from understanding animal behavior.

In all of these cases, bad metaphors kept us from seeing the truth. We used these metaphors not because they helped us understand reality, but because they were lazily borrowed from the society of the time and its prejudices. This is why scientific research can never be fully understood outside the context of the people who produced it and the culture they lived in.

Master/Slave: a Case Study

In computer science and electrical engineering, the term "master/slave" has been used in a variety of loosely related ways. A representative example is that of distributed databases: if you want to implement a database system that can scale up to handling a lot of queries, it might occur to you to put many servers around the world that have copies of the same data, instead of relying on just one server (which could fail, or could become slow if a lot of people start querying it all at once) in one physical location. But then how do you make sure that the data on all of the servers are consistent? Imagine two different whiteboards, one in the computer science building at Berkeley and one in the computer science building at MIT: there's no reason to assume that whatever is written on the two whiteboards is going to be the same unless people adopt a mechanism for communicating with each other so that one whiteboard gets updated every time the other does. In the context of databases, one mechanism for consistency is the "master/slave" paradigm: one copy of the database gets designated as the authoritative one, and all the other copies -- "slaves" -- continuously ask the master for updates that they apply to themselves (or alternately, the master publishes changes to the slaves -- that's an implementation detail).
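As an illustration only -- this is a toy sketch, not any real database's replication protocol, and all class and method names here are invented -- the pull-based scheme just described looks something like this in Python:

```python
# Toy model of the replication paradigm described above: one
# authoritative copy accepts all writes and keeps an ordered log;
# every other copy periodically pulls the log entries it hasn't
# applied yet. (Names are invented for illustration.)

class Primary:
    def __init__(self):
        self.data = {}
        self.log = []  # ordered list of (key, value) writes

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

    def updates_since(self, index):
        return self.log[index:]

class Replica:
    def __init__(self, primary):
        self.primary = primary
        self.data = {}
        self.applied = 0  # how far into the primary's log we've read

    def sync(self):
        for key, value in self.primary.updates_since(self.applied):
            self.data[key] = value
            self.applied += 1

primary = Primary()
replica = Replica(primary)
primary.write("x", 1)
primary.write("y", 2)
replica.sync()
assert replica.data == primary.data  # the copies now agree
```

Whether the authoritative copy pushes changes out or the other copies poll for them is, as noted, an implementation detail; the essential feature is that one copy is the sole source of truth.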

A lot of the historical background behind the use of "master/slave" in a technical context was already covered by Ron Eglash in his 2007 article "Broken Metaphor: The Master-Slave Analogy in Technical Literature". Unfortunately, you won't be able to read the article (easily) unless you have access to JSTOR. Eglash examined early uses of "master/slave" terminology carefully and pointed out that the term entered common use in engineering long after the abolition of slavery in the US; thus it can't be defended as "a product of its time." He also points out that "master/slave" is an inaccurate metaphor in many of the technical contexts where it's used: for example, in a system with multiple hard drives, the "master" and "slave" drives merely occupy different places in the boot sequence, rather than having a control or power relationship.

But I think the most interesting point Eglash makes is about the difference between power as embodied in mechanical systems versus electrical systems:

A second issue, closely related, is the difference that electrical signals make. Consider what it meant to drive a car before power steering. You wrestled with the wheel; the vehicle did not slavishly carry out your whims, and steering was more like a negotiation between manager and employee. Hence the appropriateness of terms such as "servo-motor" (coined in 1872) and "servomechanism" (1930s): both suggest "servant," someone subordinate but also in some sense autonomous. These precybernetic systems, often mechanically linked, did not highlight the division of control and power. But electrical systems did. Engineers found that by using an electromagnetic relay or vacuum tube, a powerful mechanical apparatus could be slaved to a tiny electronic signal. Here we have a much sharper disjunction between the informational and material domains. And with the introduction of the transistor in the 1950s and the integrated circuit in the 1960s, the split became even more stark.

This coupling of immense material power with a relatively feeble informational signal became a fundamental aspect of control mechanisms and automation at all scales...

In light of Eglash's observation, it's worth looking harder at why some engineers are so attached to the "master/slave" terminology, aside from fear of change. The "immense material power" of an electronic signal can't be observed directly. Do engineers in a white-male-dominated field like talking about their systems in terms of masters and slaves because they need to feel like they're somebody's master? Does it make them feel powerful? Given that engineering has become increasingly hostile to people who aren't white and male as it has become more dependent on leveraging smaller and smaller amounts of (physical) power to do more and more, I think it's worth asking what work metaphors like "master/slave" do to make white male engineers feel like they're doing a man's job.

Bad Metaphors

"Master/slave" both serves a psychological function and reflects authoritarian politics, even if the person using that term is not an authoritarian. No one needs to consciously be an authoritarian, though, for authoritarianism to distort our thinking. Language derived from societies organized around a few people controlling many others will affect how systems get designed.

A master/slave system has a single point of failure: what if the master fails? Then there's no longer any mechanism for the slaves to keep each other consistent. There are better solutions, which constitute an open research topic in distributed systems -- discussing them is beyond the scope of this blog post, but I just want to point out that the authoritarian imagination behind societies organized around slavery (we still live in one of those societies, by the way, given the degree to which the economy depends on the prison industry and on labor performed by prisoners) impoverishes our thinking about systems design. It turns out that single points of failure are bad news for computer systems and societies alike.
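To make the failure mode concrete, here is a self-contained toy sketch in Python (all names invented, not any real system's protocol): once the single authoritative copy is unreachable, the remaining copies have no way to converge, and each is stuck with whatever it last saw.

```python
# Toy illustration of the single point of failure: replicas can only
# stay consistent by pulling from one authoritative copy, so when that
# copy dies, any replicas that hadn't caught up are stranded.

class UnavailableError(Exception):
    pass

class Primary:
    def __init__(self):
        self.log = []
        self.alive = True

    def write(self, key, value):
        self.log.append((key, value))

    def updates_since(self, index):
        if not self.alive:
            raise UnavailableError("single point of failure")
        return self.log[index:]

class Replica:
    def __init__(self, primary):
        self.primary = primary
        self.data = {}
        self.applied = 0

    def sync(self):
        for key, value in self.primary.updates_since(self.applied):
            self.data[key] = value
            self.applied += 1

p = Primary()
a, b = Replica(p), Replica(p)
p.write("x", 1)
a.sync()         # replica a sees the write
p.alive = False  # the primary fails before b syncs
try:
    b.sync()
except UnavailableError:
    pass
assert a.data != b.data  # the replicas are now stuck, inconsistent
```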

I conjecture that the master-slave metaphor encourages us to design systems that have single points of failure, and that the metaphor is so compelling because of its relationship with the continued legacy of slavery. I don't claim to be certain. People who design decentralized, peer-to-peer systems may not be any more likely to have egalitarian politics, for all I know. So I'm asking a question, rather than answering one: do fascists, or people who haven't examined their latent fascism, build fragile systems?

Names are important. Lazy evaluation, for example, wasn't too popular when it was only known by the name of "cons should not allocate." So master/slave is worth abandoning not just because the words "master" and "slave" evoke trauma for Black Americans, but also because flawed thinking about societies and flawed thinking about technology are mutually self-reinforcing.
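For readers who haven't met it, lazy evaluation can be sketched in a few lines. This is a toy illustration in Python using memoized thunks -- not how any particular language implements it:

```python
# Lazy evaluation in miniature: a value is represented by a thunk
# (a zero-argument function) that isn't run until someone forces it,
# and whose result is cached so it runs at most once.

def delay(f):
    """Wrap a computation without running it."""
    cache = []
    def force():
        if not cache:
            cache.append(f())
        return cache[0]
    return force

calls = []
x = delay(lambda: calls.append("ran") or 42)
assert calls == []       # nothing has been computed yet
assert x() == 42         # forcing runs the computation...
assert x() == 42         # ...and the result is memoized
assert calls == ["ran"]  # the computation ran exactly once
```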

Good metaphors have the power to help us think better, just as bad ones can limit our imagination. Let's be aware of what shapes our imagination. It's not "only words" -- it's all words, and people who write software should understand that as well as anyone. Metaphors are powerful. Let's try to be aware of how they affect us, and not suppose that the power relationship between people and words only goes one way.


Do you like this post? Support me on Patreon and help me write more like it.

tim: text: "I'm not offended, I'm defiant" (defiant)
2016-06-09 08:25 pm

Against Education: A "Trans 101" 101

Arguing over the terms of reform means trying to get people to understand complexity. It violates the old adage that in politics when you are explaining you are losing. Better to let the other side explain complex formulae while you line up behind an easily articulated view.
-- Michael J. Graetz and Ian Shapiro, Death by a Thousand Cuts: The Fight over Taxing Inherited Wealth
"Transphobia comes from ignorance. Cis people treat trans people badly because they just don't understand gender. If we take the time to educate them, it'll pay off in respect."

That's my impression of the premise behind most "trans 101" workshops, handouts, and books that I've seen. I think the premise is flawed, because asserting boundaries is incompatible with education. This is not to say that education is never necessary, just that exchange of ideas and boundary-setting shouldn't be intermingled freely, much as developing software and doing code review -- or writing a book and editing it -- are different activities. While I suspect what I'm about to say applies to other social power gradients besides just trans/cis, I'm going to focus here on "trans 101" education.

I believe education is extremely oversold as a means for effecting change. You cannot convince people that you are in possession of facts and truths (borrowing Rebecca Solnit's words) while you are educating them. And in the case of "trans 101" education, what we need to teach people is exactly that: that trans people are reliable narrators of our own life stories. But in order for us to teach people what they need to know, they have to believe it already! This is why the ubiquitous advice to "educate people before you get angry at them" is as ineffective as it is smarmy: you can't educate someone into treating you as a person.

"Trans 101" workshops, on the other hand, are situations where someone or a group of people (sometimes a trans person, sometimes a cis person, sometimes a mixed group) has volunteered to do the work of educating in a structured and planned way. This isn't like randomly telling people on the Internet that they should educate strangers for free -- there's a better return on investment, and it's not something people are coerced into doing.

In practice, though, most "trans 101" content I've seen, well-intentioned as it is, is fundamentally flawed. "Trans 101" materials often rely on infographics like various versions of the "Genderbread Person" diagram, and these pictures illustrate the fundamental flaws of the educational approach. Rather than embedding any version of that diagram in this post (bad publicity is still publicity, after all), I'll defer to an illustrated critique of the 'Genderbread Person' trope that articulates why all of the diagrams are reductive and misleading.

Rather than teaching cis people what sex is, or what gender is, or about the difference between gender identity, expression, and role (I can never remember what those all mean anyway), or what "performativity" means, you could save everybody a lot of time and set a boundary, specifically: "Everyone has the right to have their sex and gender, as self-defined at a given moment in time, recognized as valid. If you are a respectful person, you will respect that right and not cross a boundary by denying the validity of someone else's self-defined sex or gender." Here's how.

Tell, Don't Ask

A hidden assumption behind most "trans 101" content is that the educator's job is to persuade. It goes without saying in much trans 101 content that the speaker (if trans) is asking the audience for permission to be a person, or that the speaker (if cis) is trying to explain to the audience why they should treat trans people as people. No matter who's saying it, it's self-undermining. If you expect to be treated as a person, you don't ask for permission to be one.

"Meeting people where they are" is a commonly cited reason to tone down or simplify discussion of boundaries and self-determination in "trans 101" content. I think most people grasp the basic concept of boundaries, at least those who are old enough to have learned to not grab the other kids' toys and that you don't get to pull your mom's hair just because you want to. So if we "meet people where they are" on the common ground of boundaries, we'll share the understanding that boundaries are not negotiable and require no justification. Justifying a statement implies it's not a boundary -- it implies that you can negotiate or debate with me on whether or not I'm a person. Actually, I know more than you do about what my subjective experience is; your opinion isn't equally valid there.

I think the premise that "meeting people where they are" requires a great deal of explanation arises partially from the difficulty of functioning in a system where it's still not widely accepted that everyone gets to have bodily autonomy. Disability, children's rights, the right to an abortion, sexual assault, and consent to being assigned a sex/gender are all examples where the conditional or contingent granting of bodily autonomy causes significant pain.

So stating boundaries isn't easy. But piling on the explanations and justifications doesn't help either. You don't take power by asking for permission. You don't demand respect by asking for permission. And there's no "please" in "I am a human being, and you had better treat me as one."

Eschew Obfuscation

You know those people who ask for a checklist, right? "Give me a list of words I should avoid using, so that I can be sure that no one will ever get mad at me again. If they get mad, I'll tell them you gave me the list and they should get mad at you instead." A lot of "trans 101" content panders to the desire to avoid doing hard interpersonal work yourself -- to formalize and automate empathy. Unfortunately, that is also self-defeating. Ideally, a "trans 101" talk should provide as few rules as possible, because checklists, flowcharts, and other rule-based approaches to respecting other people are just another site for people to exploit and search for loopholes.

The flowchart approach goes hand-in-hand with the peddling of various oversimplified models of sex and gender whose supposed benefit is being different from the one that white American children were taught in elementary school in the fifties (that boys have a penis and grow up to be men, girls have a vagina and grow up to be women, and there's nobody else). But trans people don't get oppressed because cis people don't sufficiently understand the nuances of sex and gender. Rather, cis people construct models of sex and gender that justify past oppression and make it easier for that oppression to continue. For example, teaching people that sex is "biological" and gender is in your mind doesn't make them any more likely to treat trans people as real people. We see this in the ongoing legislative attacks on trans people's right to use public accommodations: cis people who have learned that "gender identity" is self-determined while other people determine your biological sex have adapted to that knowledge by framing their hateful legislation in terms of "biological sex."

Remodeling sex and gender doesn't fix transphobia because a flawed model didn't cause it. You can't address fear with facts. Models are interesting and potentially useful to trans people, people who are questioning whether they're trans, and people who study science, culture, and the intersections between them. Everybody else really doesn't need to know.

Compare how pro-choice rhetoric fails when it revolves around enumerating reasons why someone should be allowed to have an abortion: what if you were a victim of rape or incest, or young, or sick, or you can't afford to raise a child, what if, indeed. What if nobody has the right to be in somebody else's body without that person's consent? You don't need a reason or an explanation for wanting to keep somebody else out of your body -- dwelling in your body is reason itself. Likewise, we don't need to furnish reasons or explanations for why you need to use the names and pronouns for someone that are theirs. We just need to say you must.

Know Your Audience

In "The Culture of Coercion", I drew a line between people who relate to others through coercion and those who build relationships based on trust:
  • A person operating on trust wants to be respectful, even if they don't always know how. These people are who "Trans 101" workshops try to reach. They are the majority. You don't need to bring reams of scientific evidence to convince them to be respectful -- they decided to be respectful a long time ago. It muddies the waters when you do.
  • A person who operates on coercion isn't really sold on that whole "everyone is human" concept. Workshops cannot persuade these people. If someone doesn't accept the reality of others' personal boundaries, no amount of evidence or civil discussion will change that. Firmer enforcement of those boundaries will, and an educational workshop is not the tool for enforcing those boundaries.

Education requires being really, really clear on who you're trying to reach. And unfortunately, even trust-based people are likely to try to game the system when given a flowchart on how to be respectful -- well-intentioned people still look for ways to avoid feeling like they did something wrong, because narcissistic injury is uncomfortable. The only circumstance under which you can teach is when your audience wants to know what your boundaries are, so they can respect them. So tell them!

Against Education?

I'm not really against education. Consciousness-raising, cognitive liberation, freeing your mind, getting woke, or whatever you want to call it is a prerequisite for organizing for change, especially when you're trans and are systematically denied language for describing who you are. But that is self-directed education, and I think that intentionally directing your education inwards -- in the company of like-minded people, with the goal of discovering the power you already have -- is the only way education changes the world.

In any case, education can't take place without boundaries -- classrooms have ground rules. Ask any teacher.


Do you like this post? Support me on Patreon and help me write more like it.

tim: text: "I'm not offended, I'm defiant" (defiant)
2016-06-06 08:43 am

Opinions Are Abundant and Low-Value

[twitter.com profile] moscaddie once wrote, "Dick is abundant and low-value." As she acknowledged later, this statement is cissexist, but I can borrow the phrasing without endorsing the cissexism:

Opinions are abundant and low-value.

[twitter.com profile] _danilo summarizes the co-optation of "diversity" in this Twitter thread: he observes that those who feel "marginalized by those who live in reality" demand inclusion because of "diversity of opinion."

Contorting "diversity" to demand more airtime for already-well-known beliefs relies on a fundamental misunderstanding of diversity. Diversity is a well-intentioned (if flawed) intellectual framework for bringing marginalized beliefs to the center. "Diversity of opinion" is a perversion of these good intentions to reiterate the centering of beliefs that are already centered.

Failure to explicitly define and enforce boundaries about which opinions a community values has the effect of tacitly silencing all but a very narrow range of opinions. That's because speech has effects: voicing an opinion does things to other people, or else you wouldn't bother using your time and voice to do so. (Stanley Fish made this point in his essay "There's No Such Thing as Free Speech, and It's a Good Thing, Too" [PDF link].) Everybody thinks some opinions are harmful and should be suppressed -- invoking "diversity of opinion" is a derailing tactic for disagreements about which opinions those are.

We do not need more opinions. We need more nuanced, empathetic conversations; more explicit distinguishing between fact and opinion; and more respect for everyone's expert status on their own lived experience. People who say they want more opinions actually want fewer opinions, because they are invariably arguing for already-privileged opinions to receive even more exposure. We do not need to value diversity of opinion; there are other values we can center to guide us closer to truth.


Do you like this post? Support me on Patreon and help me write more like it.

tim: text: "I'm not offended, I'm defiant" (defiant)
2016-05-31 11:59 am

Reporting suspicious activity in North Carolina

Edited to add: The quote turns out to be from a fake news site, but calling the governor's office can't hurt!

At a press conference today, North Carolina Gov. Pat McCrory took further steps to ensure that his controversial bill, HB2, will be upheld when it comes to law enforcement. McCrory announced that his office has setup a 24-hour hotline for individuals to call if they witness someone not abiding by the new law.

“If you see a woman, who doesn’t look like a woman, using the woman’s restroom, be vigilant, call the hotline, and report that individual.” McCrory told reporters. “We need our state to unite as one if we’re going to keep our children safe from all the sexual predators and other aberrant behavior that is out there.”

Tom Downey, a spokesman for the Governor’s Office, explained the new hotline to reporters.

“Beginning today, individuals that notice any kind of gender-suspicious activity in the men’s or women’s restrooms are encouraged to call the new ‘HB2 Offender Hotline’,” Horner said. “We encourage North Carolina’s residents to take photographs and report as much detail as possible when calling. With the information gathered from this hotline, we’ll be working closely with local law enforcement agencies to make sure this law is enforced and those who break the law see jail bars. We are sending a clear message to all the transsexuals out there; their illegal actions and deviant behavior will no longer be tolerated in the state of North Carolina."

[...]
To report suspicious bathroom activity, North Carolina residents can call the ‘HB2 Offender Hotline’ at 1-800-662-7952. For individuals living outside of North Carolina, please call (919) 814-2000. To file a complaint after normal business hours, call (919) 814-2050 and press option 3.


-- ABC News report


(Note: I struck out the 919-814-2000 number. It doesn't accept voicemail and when I called during East Coast business hours, I got a recording saying to call back during business hours. The 800 number appears to reject calls from non-North-Carolina area codes.)

I encourage you to use your own words, but if you don't know what to say, here's a script you can use when leaving a message at the 919 number, or both numbers if you have a North Carolina phone number you can call from. I adapted this script from a post on Tumblr by [tumblr.com profile] lemonsharks.

I am calling to report suspicious activity.

It is very suspicious that the state of North Carolina is spending money enforcing a law whose sole purpose is to harass trans people and stop them from participating in public life. This would be suspicious even if North Carolina didn’t have a child poverty rate of over 25%. 

It’s suspicious that people who are not trans are enacting this kind of legislative violence against trans people. It’s suspicious that they have not reflected on their own fear, asked themselves what they are so afraid of, rather than projecting their unexamined fear outward onto vulnerable people.

I think you need to investigate this immediately. Thanks for your attention. Goodbye.
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
2016-05-16 11:52 am
Entry tags:

Fundraising for Code2040: recap

As I said in this post a month ago, I pledged to donate $5 for each harassing tweet I received as part of the SJWList harassment campaign. I received 5 such tweets and have donated $25 to http://www.code2040.org/. The combined impact of this donation will be $125: $25 from me, $25 each from [twitter.com profile] bcjbcjbcj and [twitter.com profile] cbeckpdx, $25 from an anonymous donor, and a $25 match from my employer.

A few harassing tweets can go a long way! (Not meant as encouragement to harass people :)
tim: "System Status: Degraded" (degraded)
2016-05-14 04:27 pm
Entry tags:

Disowning Desire

[CW: discussion of rape, cissexism, transmisogynistic violence]

Disowning desire: how cis people use deception, contamination, and stigma to deny their attraction to trans people

The biggest threat to cisnormativity is the idea that a trans person, particularly a trans person who was coercively assigned male at birth, could be attractive.

The social stigmatization of trans people creates a positive feedback loop of attraction and desire in cis people's minds. A minor manifestation of that feedback loop is the OkCupid question that has ruined more of my potential relationships than I care to count: "When is it most appropriate for a transgender person to reveal their transgender status to a match?" [Screenshot of an OkCupid question; the text of the question and answers are in the body text.] The answer choices are, "It should be clearly stated in their profile," "During messaging prior to meeting in person," "Prior to having intimate contact or sex," and "Never." Absent is the answer I want to give: "Only if and when the particular trans person in question wants to and feels it is safe to do so."

Typically, cis people frame their answers to this question (if asked to justify their answers, which they seldom are) as being about "honesty." A cis person might say, "I have the right to know important parts of someone's history before I get into a relationship with them." Absent is an explanation of why it's only the parts of someone's history relating to the sex they were coercively assigned at birth that are relevant, and why no other aspect of someone's history requires this level of transparency.

Platitudes about "the right to know" or "honesty in relationship" are tidy disguises for a messy collection of fears, insecurities, and desires. I think they serve to conceal the work that the OkCupid question does: the work of shifting emotional labor off people in socially privileged classes, and onto people in socially disprivileged classes.

In a (current or nascent) relationship, who does the work? Who takes risks? Should a cis person risk embarrassing another cis person by asking, "Are you cis?" on a date or in a message thread on a dating site? Or should a trans person (in practice, usually a trans woman) take the initiative in disclosing that they are trans, thereby taking on the risk of being harmed or killed? How much bodily harm does a trans person need to be willing to risk in order to spare a cis person from embarrassment?

Read more... )


Do you like this post? Support me on Patreon and help me write more like it.

tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
2016-04-14 09:24 am
Entry tags:

Fundraising for Code2040

If you missed it, I (and several hundred of my colleagues) are now on a blacklist because we signed a statement saying that we disagreed with the LambdaConf functional programming conference's decision to host a white supremacist speaker.

I filed a complaint to the Internet service provider (Alchemy Communications, a partial subsidiary of Dreamhost Communications according to Alchemy's home page) for the blacklist, since its purpose is clearly to incite harassment and violence against individuals. In response, an employee of Alchemy posted my personally identifying information to 8chan, and I'm now being harassed on Twitter with tweets @-mentioning both me and the CEO of the company I work for.

From now until May 15, I'll be donating $5 to Code2040 for every harassing tweet I receive as part of this campaign. (I am the final arbiter of which tweets are harassing and are part of this campaign, for the purpose of this fundraiser.) I'll make the final donation after May 15 and post receipts. Donations will be matched quadruply by an anonymous donor who will match up to $100; [twitter.com profile] bcjbcjbcj, who will match up to $250; and [twitter.com profile] cbeckpdx, who will match up to $150. That means the first 20 harassing tweets (I've gotten 4 so far) will count for $20 each, the next 10 will count for $15 each, the next 20 will count for $10 each, and all remaining tweets up to May 15 will count for $5 each.
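The tier arithmetic above can be sanity-checked with a few lines of code. This is just an illustrative sketch of the matching rules as I've described them (base $5 per tweet, three matchers with caps of $100, $250, and $150, each matcher dropping out when their cap is exhausted):

```python
# Toy check of the matching-tier arithmetic: each tweet is worth the
# $5 base plus $5 from every matcher whose cap isn't yet exhausted.
def per_tweet_values(base=5, caps=(100, 250, 150)):
    """Return the dollar value of each successive tweet until all caps run out."""
    remaining = list(caps)
    values = []
    while any(r > 0 for r in remaining):
        active = sum(1 for r in remaining if r > 0)
        values.append(base * (1 + active))
        # Deduct this tweet's match from every matcher still in the game.
        for i, r in enumerate(remaining):
            if r > 0:
                remaining[i] -= base
    return values

values = per_tweet_values()
# First 20 tweets: $20 each; next 10: $15 each; next 20: $10 each.
assert values[:20] == [20] * 20
assert values[20:30] == [15] * 10
assert values[30:50] == [10] * 20
```

After the fiftieth tweet all three caps are exhausted, so any further tweets are worth the $5 base alone, matching the breakdown above.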

What better way to deal with white supremacist harassment than to support Black and Latin@ programmers? Let me know if you'd like to match donations as well.

Thanks to Kelly Ellis for the idea.
tim: Solid black square (black)
2016-04-11 10:32 am
Entry tags:

50

Debra would have been 50 today.

"you were a presence full of light upon this earth
And I am a witness to your life and to its worth"
(x)
tim: A person with multicolored hair holding a sign that says "Binaries Are For Computers" with rainbow-colored letters (binaries)
2016-04-04 09:22 am
Entry tags:

Depression is Not an Evil Monster

CW: depression, suicide

"And you can stay busy all day
He’s never going away"
-- the Mountain Goats, "Keeping House"


I've lived with depression for 24 years, more than two-thirds of my life. That's not to say that I subjectively feel depressed all the time, thankfully, just that for me, depression is a chronic illness. Sometimes, it incapacitates me. Sometimes, I have periods of time that make me ask myself, "So is this what it's like to be a normal person?". Most of the time, it's present as ambient noise that rarely quiets completely.

It is currently popular to talk about depression as a thing exterior to a person, like a virus that uses a person as a host but has no real life of its own. I guess it's popular among people with good intentions: they want to de-stigmatize depression. But the metaphor of depression as an evil monster that takes you over makes me wildly uncomfortable. The evil-monster metaphor frames depression as a thing a person has, like a suitcase, that can be put down -- not an intrinsic part of a person. Alternatively, it frames depression as being like a demon on your shoulder, whispering lies in your ear: it's a bad part of yourself, it's your "jerkbrain". It's an interloper that is occupying your mind and body with no regard for you.
Read more... )


Do you like this post? Support me on Patreon and help me write more like it.

tim: text: "I'm not offended, I'm defiant" (defiant)
2016-03-10 11:05 am
Entry tags:

The Culture of Coercion

Much of the conflict between "social justice warriors" and their antagonists arises from a conflict between mutual trust as a political foundation, and coercion (arising from distrust) as a political tactic. (I previously wrote about this conflict in "The Christians and the Pagans".)

People who are used to operating on coercion assume the worst of others and both expect to be coerced into doing good, and expect that they will have to coerce others in order to get what they want or need. People who are more used to operating on trust assume that others will usually want to help and will act in good faith out of a similar desire for mutual trust.

I want to be clear that when I talk about coercion-based people, I'm not talking about sociopaths or any other category that's constructed based on innate neurological or psychological traits. In fact, people might act coercion-based in one situation, and trust-based in another. For example, a white feminist might act like they're trust-based in a situation that involves gender inequality, but coercion-based when it comes to examining racism. And I'm also not saying people never cross over from one group into another -- I think it can happen in both directions. But to stop relying on coercion requires work, and there are few incentives to do that work. There are, however, a lot of incentives to give up trust in favor of coercion (or at least pretend to) and give up your empathy.

If you assume the worst of other people, of course you won't be able to imagine any way to achieve your goals other than coercion. Assuming the worst isn't a character flaw -- it's taught, and thus, can be unlearned. At the same time, experience isn't an excuse for treating others badly (and people who assume the worst of others will treat others badly, partly because it helps make their assumptions self-fulfilling, removing the need for them to change their assumptions and behavior). We are all obligated to do the work that it takes to live with others while minimizing the harm that we do to them.

Read more... )


Do you like this post? Support me on Patreon and help me write more like it.

tim: "System Status: Degraded" (degraded)
2016-03-07 11:06 am

Yes, All Parents

CW: discussion of abuse, gaslighting, and silencing of abuse survivors
"I feel like a thing non-queer ppl seem to often not get is the importance of protecting children from their parents" -- [twitter.com profile] mcclure111 on Twitter
I was glad to read this tweet by [twitter.com profile] mcclure111 because it's a truth that's deeply known by many of us who are queer, or abuse survivors, or both. It's a truth that's as rarely stated as it is deeply known.

But the tweet provoked as much discomfort in others as relief in me. This reply is a representative example of the things people say to survivors speaking uncomfortable truths:

"(kids definitely need protecting from parental harm, but many parents I know, including my own, are Really Good)"

"Many parents are good" is a statement devoid of denotation. When somebody utters a sequence of words that say nothing, I have to ask what they are trying to do by saying those words. Are they trying to take control of the conversation? Are they putting the speaker in their place? Are they expressing discomfort at having their belief in a just world disrupted? Whatever the motivation, direct verbal communication isn't it.

"Many parents... are Really Good" may seem shallow and obvious, but when I ask what those words do rather than what they mean, there's a lot to unpack. Ultimately, "many parents are good" has little to do with the character of the unnamed individuals being defended and much to do with defending the practice of authoritarian parenting.

Read more... )


Thanks to the people who read a draft of this post and contributed feedback that helped me make it better, particularly [twitter.com profile] alt_kia.
Do you like this post? Support me on Patreon and help me write more like it.

tim: text: "I'm not offended, I'm defiant" (defiant)
2016-02-11 07:21 pm
Entry tags:

On "male" and "female" vs. "man" and "woman"

The question of whether "male" means something different from "man", and whether "female" means something different from "woman", has come up in two different situations for me in the past few weeks. I like being able to hand people a link rather than restating the same thing over and over, so here's a quick rundown of why I think it's best to treat "male" as the adjectival form of "man" and "female" as the adjectival form of "woman".

I prioritize bodily autonomy and self-definition. Bodily autonomy means people get to relate to their bodies in the way that they choose; if we're to take bodily autonomy seriously, respecting self-definition is imperative. If you use language for someone else's body or parts thereof that that person wouldn't use for themselves, you are saying that you know better than they do how they should relate to their body.

For example: I have a uterus, ovaries, and vagina, and they are male body parts, because I'm male. Having been coercively assigned female at birth doesn't change the fact that I've always been male. Having an XX karyotype doesn't make me female (I'm one of the minority of people that actually knows their karyotype, because I've had my DNA sequenced). Those are male chromosomes for me, because they're part of me and I'm male. If I ever get pregnant and give birth, I'll be doing that as a male gestator.

I don't know too many people who would want to be referred to as a male woman or a female man, so I'm personally going to stick to using language that doesn't define people by parts of their bodies that are private. And no, you can't claim parts of my body are "female" without claiming I am - if they're female, whose are they? Not mine.

If someone does identify as a male woman or as a female man, cool. The important thing is that we use those words to describe them because those are the words they use to describe themself rather than because of what sociopolitical categories we place them in based on their body parts.

For extra credit, explain why the widespread acceptance of the sex-vs.-gender binary is the worst thing that ever happened to transsexual people.

Further reading: [personal profile] kaberett, Terms you don't get to describe me in, #2: female-bodied.
Do you like this post? Support me on Patreon and help me write more like it.
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
2016-02-08 03:06 pm

Support Erica's Spoon Fund! (Second nag)

Bunnies in wine glasses!

(from [tumblr.com profile] xxdaybreak)

Now that I've got your attention: my friend Erica is raising money for much-needed trauma therapy and could use your help. I've known her IRL for ten years and can vouch for her as much as I can for anyone in the world; she's a real person and the money will go to do what it says on the tin. Erica is someone who's supported me in a myriad of ways, and I'm not the only one, so if you help her, you'll be helping me. She just needs $145 more in order to meet her goal.

If you have a couple bucks to spare: do it to support an intersectional social justice writer, do it to support a disabled queer trans woman of color, do it to redistribute wealth, or just do it because that would make me happy. Here's the link to her fundraiser. I reserve the right to keep nagging you all until she meets her goal.

Edit: Erica reached her goal! Thanks to those who donated.
tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
2016-02-02 11:05 pm

Support Erica's Spoon Fund!

Here's a bunny!

(source: [twitter.com profile] carrot666 by way of [tumblr.com profile] kaberabbits)

Now that I've got your attention: my friend Erica is raising money for much-needed trauma therapy and could use your help. I've known her IRL for ten years and can vouch for her as much as I can for anyone in the world; she's a real person and the money will go to do what it says on the tin. Erica is someone who's supported me in a myriad of ways, and I'm not the only one, so if you help her, you'll be helping me.

If you have a couple bucks to spare: do it to support an intersectional social justice writer, do it to support a disabled queer trans woman of color, do it to redistribute wealth, or just do it because that would make me happy. Here's the link to her fundraiser. I reserve the right to keep nagging you all until she meets her goal.
tim: "System Status: Degraded" (degraded)
2016-02-01 03:03 pm

The Democratization of Defamation: Part 4

This post is the last in a 4-part series. The first three parts were "Defame and Blame", "Phone Books and Megaphones," and "Server-Side Economics."

Harassment as Externality

In part 3, I argued that online harassment is not an accident: it's something that service providers enable because it's profitable for them to let it happen. To know how to change that, we have to follow the money. There will be no reason to stop abuse online as long as advertisers are the customers of the services we rely on. To enter into a contract with a service you use and expect that the service provider will uphold their end of it, you have to be their customer, not their product. As their product, you have no more standing to enter into such a contract than do the underground cables that transmit content.

Harassment, then, is good for business -- at least as long as advertisers are customers and end users are raw material. If we want to change that, we'll need a radical change to the business models of most Internet companies, not shallow policy changes.

Deceptive Advertising

Why is false advertising something we broadly disapprove of -- something that's, in fact, illegal -- but spreading false information in order to entice more eyeballs to view advertisements isn't? Why is it illegal to run a TV ad that says "This toy will run without electricity or batteries," but not illegal for a social media site to surface the message, "Alice is a slut, and while we've got your attention, buy this toy?" In either case, it's lying in order to sell something.

Advertising will affect decision-making by Internet companies as long as advertising continues to be their primary revenue source. If you don't believe in the Easter Bunny, you shouldn't believe it either when executives tell you that ad money is a big bag of cash that Santa Claus delivers with no strings attached. Advertising incentivizes ad-funded media to do whatever gets the most attention, regardless of truth. The choice to do what gets the most attention has ethical and political significance, because achieving that goal comes at the expense of other values.

Should spreading false information have a cost? Should dumping toxic waste have a cost? They both cost money and time to clean up. CDA 230 protects sites that profit from user-generated content from liability from paying any of the costs of that content, and maybe it's time to rethink that. A search engine is not like a common carrier -- one of the differences is that it allows one-to-many communication. There's a difference between building a phone system that any one person can use to call anyone else, and setting up an autodialer that lets the lucky 5th callee record a new message for it.

Accountability and Excuses

"Code is never neutral; it can inhibit and enhance certain kinds of speech over others. Where code fails, moderation has to step in."
-- Sarah Jeong, The Internet of Garbage
Have you ever gone to the DMV or called your health insurance company and been told "The computer is down" when, you suspected, the computer was working fine and it just wasn't in somebody's interest to help you right now? "It's just an algorithm" is "the computer is down," writ large. It's a great excuse for failure to do the work of making sure your tools don't reproduce the same oppressive patterns that characterize the underlying society in which those tools were built. And they will reproduce those patterns as long as you don't actively do the work of making sure they don't. Defamation and harassment disproportionately affect the most marginalized people, because those are exactly the people that you can bully with few or no consequences. Make it easier to harass people, to spread lies about them, and you are making it easier for people to perpetuate sexism and racism.

There are a number of tools that technical workers can use to help mitigate the tendency of the communities and the tools that they build to reproduce social inequality present in the world. Codes of conduct are one tool for reducing the tendency of subcultures to reproduce inequality that exists in their parent culture. For algorithms, human oversight could do the same -- people could regularly review search engine results in a way that includes verifying factual claims that are likely to have a negative impact on a person's life if the claims aren't true. It's also possible to imagine designing heuristics that address the credibility of a source rather than just its popularity. But all of this requires work, and it's not going to happen unless tech companies have an incentive to do that work.
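To make the "credibility rather than just popularity" idea concrete, here's a deliberately simplified sketch. Nothing here reflects any real search engine's ranking; the item names, numbers, and the single multiplicative weight are all made up for illustration:

```python
# Illustrative only: a ranker that discounts raw popularity by a
# per-source credibility weight in [0, 1], so a heavily-clicked but
# unverified item can rank below a less popular, well-sourced one.
def rank(items):
    """items: list of (title, clicks, credibility); highest score first."""
    return sorted(items, key=lambda it: it[1] * it[2], reverse=True)

results = rank([
    ("viral rumor about Alice", 90_000, 0.05),  # popular, unverified
    ("fact-checked profile", 20_000, 0.9),      # less popular, vetted
])
assert results[0][0] == "fact-checked profile"  # 18,000 beats 4,500
```

The hard part, of course, is producing the credibility weight at all -- which is exactly the human-oversight work that nothing currently gives companies an incentive to do.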

A service-level agreement (SLA) is a contract between the provider of a service and the service's users that outlines what the users are entitled to expect from the service in exchange for their payment. Because people pay for most Web services with their attention (to ads) rather than with money, we don't usually think about SLAs for information quality. For an SLA to work, we would probably have to shift from an ad-based model to a subscription-based model for more services. We can measure how much money you spend on a service -- we can't measure how much attention you provide to its advertisers. So attention is a shaky basis on which to found a contract. Assuming business models where users pay in a more direct and transparent way for the services they consume, could we have SLAs for factual accuracy? Could we have an SLA for how many death threats or rape threats it's acceptable for a service to transmit?

I want to emphasize one more time that this article isn't about public shaming. The conversation that uses the words "public shaming" is about priorities, rather than truth. Some people want to be able to say what they feel like saying and get upset when others challenge them on it rather than politely ignoring it. When I talk about victims of defamation, that's not who I'm talking about -- I'm talking about people against whom attackers have weaponized online media in order to spread outright lies about them.

People who operate search engines already have search quality metrics. Could one of them be truth -- especially when it comes to queries that impinge on actual humans' reputations? Wikipedia has learned this lesson: its policy on biographies of living persons (BLP) didn't exist from the site's inception, but arose as a result of a series of cases in which people acting in bad faith used Wikipedia to libel people they didn't like. Wikipedia learned that if you let anybody edit an article, there are legal risks; the risks were (and continue to be) especially real for Wikipedia due to how highly many search engines rank it. To some extent, content providers have been able to protect themselves from those risks using CDA 230, but sitting back while people use your site to commit libel is still a bad look... at least if the targets are famous enough for anyone to care about them.

Code is Law

Making the Internet more accountable matters because, in the words of Lawrence Lessig, code is law. Increasingly, software automates decisions that affect our lives. Imagine if you had to obey laws, but weren't allowed to read their text. That's the situation we're in with code.

We recognize that the passenger in a hypothetical self-driving car programmed to run over anything in its path has made a choice: they turned the key to start the machine, even if from then on, they delegated responsibility to an algorithm. We correctly recognize the need for legal liability in this situation: otherwise, you could circumvent laws against murder by writing a program to commit murder instead of doing it yourself. Somehow, when physical objects are involved it's easier to understand that the person who turns the key, who deploys the code, has responsibility. It stops being "just the Internet" when the algorithms you designed and deployed start to determine what someone's potential employers think of them, regardless of truth.

There are no neutral algorithms. An algorithmic blank slate will inevitably reproduce the violence of the social structures in which it is embedded. Software designers have the choice of trying to design counterbalances to structural violence into their code, or to build tools that will amplify structural violence and inequality. There is no neutral choice; all technology is political. People who say they're apolitical just mean their political interests align well with the status quo.

Recommendation engines like YouTube, or any other search engine with relevance metrics and/or a recommendation system, just recognize patterns -- right? They don't create sexism; if they recommend sexist videos to people who aren't explicitly searching for them, that's because sexist videos are popular, right? YouTube isn't to blame for sexism, right?

Well... not exactly. An algorithm that recognizes patterns will recognize oppressive patterns, like the determination that some people have to silence women, discredit them, and pollute their agencies. Not only will it recognize those patterns, it will reproduce those patterns by helping people who want to silence women spread their message, which has a self-reinforcing effect: the more the algorithm recommends the content, the more people will view it, which reinforces the original recommendation. As Sarah Jeong wrote in The Internet of Garbage, "The Internet is presently siloed off into several major public platforms" -- public platforms that are privately owned. The people who own each silo own so many computing resources that competing with them would be infeasible for all but a very few -- thus, the free market will never solve this problem.
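The self-reinforcing effect is easy to demonstrate with a toy model. This is a crude caricature, not any real platform's recommender: assume the system simply surfaces whatever is already most viewed, and that being surfaced earns the next view.

```python
# Toy model of the feedback loop: recommend the most-viewed item,
# which then gains the view, which keeps it the most-viewed item.
# A tiny initial lead absorbs all subsequent attention, regardless
# of what the items actually contain.
def simulate(views, steps):
    views = dict(views)
    for _ in range(steps):
        top = max(views, key=views.get)  # the recommendation
        views[top] += 1                  # the resulting view
    return views

final = simulate({"a": 11, "b": 10, "c": 10}, steps=1000)
assert final == {"a": 1011, "b": 10, "c": 10}
```

Item "a" starts one view ahead and ends up with every one of the thousand new views. Real recommenders are stochastic rather than winner-take-all like this, but the direction of the pressure is the same: the loop amplifies whatever it starts with.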

Companies like Google say they don't want to "be evil", but intending to "not be evil" is not enough. Google has an enormous amount of power, and little to no accountability -- no one who manages this public resource was elected democratically. There's no process for checking the power they have to neglect and ignore the ways in which their software participates in reproducing inequality. This happened by accident: a public good (the tools that make the Internet a useful source of knowledge) has fallen under private control. This would be a good time for breaking up a monopoly.

Persistent Identities

In the absence of anti-monopoly enforcement, is there anything we can do? I think there is. Anil Dash has written about persistent pseudonyms, a way to make it possible to communicate anonymously online while still standing to lose something of value if you abuse that privilege in order to spread false information. The Web site Metafilter charges a small amount of money to create an account, in order to discourage sockpuppeting (the practice of responding to being banned from a Web site by coming back to create a new account) -- it turns out this approach is very effective, since people who are engaging in harassment for laughs don't seem to value their own laughs very highly in terms of money.

I think advertising-based funding also helps explain why more sites don't implement persistent pseudonyms. The advertising-based business model encourages service providers to make it as easy as possible for people to use their service; requiring the creation of an identity would put an obstacle in the way of immediate engagement. This is good from the perspective of nurturing quality content, but bad from the perspective that it limits the number of eyeballs that will be focused on ads. And thus, we see another way in which advertising enables harassment.

Again, this isn't a treatise against anonymity. None of what I'm saying implies you can't have 16 different identities for all the communities you participate in online. I am saying that I want it to be harder for you to use one of those identities for defamation without facing consequences.

A note on diversity

Twitter, Facebook, Google, and other social media and search companies are notoriously homogeneous, at least when it comes to their engineering staff and their executives, along gendered and racial lines. But what's funny is that Twitter, Facebook, and other sites that make money by using user-generated content to attract an audience for advertisements, are happy to use the free labor that a diversity of people do for them when they create content (that is, write tweets or status updates). The leaders of these companies recognize that they couldn't possibly hire a collection of writers who would generate better content than the masses do -- and anyway, even if they could, writers usually want to be paid. So they recognize the value of diversity and are happy to reap its benefits. They're not so enthusiastic to hire a diverse range of people, since that would mean sharing profits with people who aren't like themselves.

And so here's a reason why diversity means something. People who build complex information systems based on approximations and heuristics have failed to incorporate credibility into their designs. Almost uniformly, they design algorithms that will promote whatever content gets the most attention, regardless of its accuracy. Why would they do otherwise? Telling the truth doesn't attract an audience for advertisers. On the other hand, there is a limit to how much harm an online service can do before the people whose attention they're trying to sell -- their users -- get annoyed and start to leave. We're seeing that happen with Twitter already. If Twitter's engineers and product designers had included more people in demographics that are vulnerable to attacks on their credibility (starting with women, non-binary people, and men of color), then they'd have a more sustainable business, even if it would be less profitable in the short term. Excluding people on the basis of race and gender hurts everyone: it results in technical decisions that cause demonstrable harm, as well as alienating people who might otherwise keep using a service and keep providing attention to sell to advertisers.

Internalizing the Externalities

In the same way that companies that pollute the environment profit by externalizing the costs of their actions (they get to enjoy all the profit, but the external world -- the government and taxpayers -- get saddled with the responsibility of cleaning up the mess), Internet companies get to profit by externalizing the cost of transmitting bad-faith speech. Their profits are higher because no one expects them to spend time incorporating human oversight into pattern recognition. The people who actually generate bad-faith speech get to externalize the costs of their speech as well. It's the victims who pay.

We can't stop people from harassing or abusing others, or from lying. But we can make it harder for them to do it consequence-free. Let's not let the perfect be the enemy of the good. Analogously, codes of conduct don't prevent bad actions -- rather, they give people assurance that justice will be done and harmful actions will have consequences. Creating a link between actions and consequences is what justice is about; it's not about creating dark corners and looking the other way as bullies arrive to beat people up in those corners.

...the unique force-multiplying effects of the Internet are underestimated. There’s a difference between info buried in small font in a dense book of which only a few thousand copies exist in a relatively small geographic location versus blasting this data out online where anyone with a net connection anywhere in the world can access it.
-- Katherine Cross, "'Things Have Happened In The Past Week': On Doxing, Swatting, And 8chan"
When we protect content providers from liability for the content that they have this force-multiplying effect on, our priorities are misplaced. With power comes responsibility; currently, content providers have enormous power to boost some signals while dampening others, and the fact that these decisions are often automated and always motivated by profit rather than pure ideology doesn't reduce the need to balance that power with accountability.
"The technical architecture of online platforms... should be designed to dampen harassing behavior, while shielding targets from harassing content. It means creating technical friction in orchestrating a sustained campaign on a platform, or engaging in sustained hounding."
-- Sarah Jeong, The Internet of Garbage
That our existing platforms neither dampen nor shield isn't an accident -- dampening harassing behavior would limit the audience for the advertisements that can be attached to the products of that harassing behavior. Indeed, they don't just fail to dampen, they do the opposite: they amplify the signals of harassment. At the point where an algorithm starts to give a pattern a life of its own -- starts to strengthen a signal rather than merely repeating it -- it's time to assign more responsibility to companies that trade in user-generated content than we traditionally have. To build a recommendation system that suggests particular videos are worth watching is different from building a database that lets people upload videos and hand URLs for those videos off to their friends. Recommendation systems, automated or not, create value judgments. And the value judgments they surface have an irrevocable effect on the world. Helping content get more eyeballs is an active process, whether or not it's implemented by algorithms people see as passive.
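The difference between merely hosting content and actively amplifying it can be made concrete. Here's a minimal sketch -- all names, data, and weights are hypothetical, not any real platform's code -- contrasting a ranker that promotes whatever gets the most attention with one that discounts engagement by an (assumed) credibility estimate:

```python
# Toy contrast between engagement-only ranking and credibility-weighted
# ranking. Everything here is illustrative; real systems use far more
# signals, but the value judgment embedded in the scoring function is
# the same kind of design choice.

def rank_by_engagement(items):
    """Promote whatever gets the most attention; accuracy is ignored."""
    return sorted(items, key=lambda it: it["clicks"], reverse=True)

def rank_with_credibility(items, weight=0.2):
    """Discount raw engagement by a credibility estimate in [0, 1].

    `weight` is the floor given to pure engagement; the rest of the
    score is scaled by credibility. Both numbers are made up.
    """
    def score(it):
        return it["clicks"] * (weight + (1 - weight) * it["credibility"])
    return sorted(items, key=score, reverse=True)

feed = [
    {"title": "provocative rumor", "clicks": 9000, "credibility": 0.1},
    {"title": "careful reporting", "clicks": 4000, "credibility": 0.9},
]

print([it["title"] for it in rank_by_engagement(feed)])
print([it["title"] for it in rank_with_credibility(feed)])
```

Under the first ranker the rumor wins on clicks alone; under the second, the credibility discount reverses the order. Neither ranker is neutral -- each encodes a judgment about what deserves an audience.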

There is no hope of addressing the problem of harassment as long as it continues to be an externality for the businesses that profit from enabling it. Whether the remedy is supporting subscription-based services with our money and declining to give our attention to advertising-based ones, expanding legal liability for the signals that a service selectively amplifies, or normalizing the use of persistent pseudonyms, something has to change: people will continue to have their lives limited by Internet defamation campaigns as long as media companies can profit from such campaigns without paying their costs.


Do you like this post? Support me on Patreon and help me write more like it.

tim: "System Status: Degraded" (degraded)
2016-01-31 10:13 pm

The Democratization of Defamation: Part 3

This post is the third in a 4-part series. The first two parts were "Defame and Blame" and "Phone Books and Megaphones."

Server-Side Economics

In "Phone Books and Megaphones", I talked about easy access to the megaphone. We can't just blame the people who eagerly pick up the megaphone when it's offered for the content of their speech -- we also have to look at the people who own the megaphone, and why they're so eager to lend it out.

It's not an accident that Internet companies are loath to regulate harassment and defamation. There are economic incentives for the owners of communication channels to disseminate defamation: they make money by doing it, and lose neither money nor credibility in the process. There are few incentives for the owners of these channels to maintain their reputations by fact-checking the information they distribute.

I see three major reasons why it's so easy for false information to spread:

  • Economic incentives to distribute any information that gets attention, regardless of its truth.
  • The public's learned helplessness in the face of software, which makes it easy for service owners to claim there's nothing they can do about defamation. By treating the algorithms they themselves implemented as black boxes, their designers can disclaim responsibility for the actions of the machines they set into motion.
  • Algorithmic opacity, which keeps the public uninformed about how code works and makes it more likely they'll believe that it's "the computer's fault" and that people can't change anything.

Incentives and Trade-Offs

Consider email spam as a cautionary tale. Spam and abuse are both economic problems. The problem of spam arose because the person who sends an email doesn't pay the cost of transmitting it to the recipient. This creates an incentive to use other people's resources to advertise your product for free. Likewise, harassers can spam the noosphere with lies, as they continue to do in the context of GamerGate, and never pay the cost of their mendacity. Even if your lies get exposed, they won't be billed to your reputation -- not if you're using a disposable identity, or if you're delegating the work to a crowd of people using disposable identities (proxy recruitment). The latter is similar to how spammers use botnets to get computers around the world to send spam for them, usually unbeknownst to the computers' owners -- except rather than using viral code to co-opt a machine into a botnet, harassers use viral ideas to recruit proxies.

In The Internet of Garbage, Sarah Jeong discusses the parallels between spam and abuse at length. She asks why the massive engineering effort that's been put towards curbing spam -- mostly successfully, at least in the sense of saving users from the time it takes to manually filter spam (Internet service providers still pay the high cost of transmitting it, only for it to be filtered out at the client side) -- hasn't been applied to the abuse problem. I think the reason is pretty simple: spam costs money, but abuse makes money. By definition, almost nobody wants to see spam (a tiny percentage of people do, which is why it's still rewarding for spammers to try). But lots of people want to see provocative rumors, especially when those rumors reinforce their sexist or racist biases. In "Trouble at the Koolaid Point", Kathy Sierra wrote about the incentives for men to harass women online: a belief that any woman who gets attention for her work must not deserve it, must have tricked people into believing her work has value. This doesn't create an economic incentive for harassment, but it does create an incentive -- meanwhile, if you get more traffic to your site and more advertising money because someone's using it to spread GamerGate-style lies, you're not going to complain. Unless you follow a strong ethical code, of course, but tech people generally don't. Putting ethics ahead of profit would betray your investors, or your shareholders.

If harassment succeeds because there's an economic incentive to let it pass through your network, we have to fight it economically as well. Moralizing about why you shouldn't let your platform enable harassment won't help, since the platform owners have no shame.

Creating these incentives matters. Currently, there's a world-writable database with everyone's names as the keys, and no accounting or authentication. A few people control it, and a few people get the profits. We shrug our shoulders and say, "How can we trace the person who injected this piece of false information into the system? There's no way to track people down." But somebody made the decision to build a system in which people can speak with no incentive to be truthful. Alternative designs are possible.
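One alternative design, sketched in miniature: a claims store where every write carries an authenticated author and a tamper-evident hash chain, so a statement can always be traced back to a persistent identity. This is purely illustrative -- the class, field names, and identity scheme are all hypothetical, and real provenance systems are far more involved:

```python
# Toy sketch of a write log with provenance: who said what about whom,
# when, chained together so entries can't be silently altered.
# Hypothetical design, not a description of any existing system.

import hashlib
from datetime import datetime, timezone

class ClaimLog:
    def __init__(self):
        self.entries = []

    def post(self, author_id, subject, statement):
        """Record an authored claim; chain its hash to the previous entry."""
        prev = self.entries[-1]["digest"] if self.entries else ""
        record = f"{prev}|{author_id}|{subject}|{statement}"
        entry = {
            "author": author_id,
            "subject": subject,
            "statement": statement,
            "time": datetime.now(timezone.utc).isoformat(),
            "digest": hashlib.sha256(record.encode()).hexdigest(),
        }
        self.entries.append(entry)
        return entry["digest"]

    def claims_about(self, subject):
        """Everything said about a subject, with authorship attached."""
        return [e for e in self.entries if e["subject"] == subject]

log = ClaimLog()
log.post("alice@example.org", "some_target", "a defamatory rumor")
print([e["author"] for e in log.claims_about("some_target")])
```

The point isn't this particular data structure; it's that "we can't trace who said it" is a property someone chose, and that attaching accountability to writes is a design decision like any other.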

Autonomous Cars, Autonomous Code

Another reason why there's so little economic incentive to control libel is that the public has a sort of learned helplessness about algorithms... at least when it's "just" information that those algorithms manipulate. We wouldn't ask why a search engine returns the results it does for a particular query (unless we study information retrieval), because we assume that algorithms are objective and neutral, that they don't reproduce the biases of the humans who built them.

In part 2, I talked about why "it's just an algorithm" isn't a valid answer to questions about the design choices that underlie algorithms. We recognize this better for algorithms that aren't purely about producing and consuming information. We recognize that despite being controlled by algorithms, self-driving cars have consequences for legal liability. It's easy to empathize with the threat that cars pose to our lives, and we're correctly disturbed by the idea that you or someone you love could be harmed or killed by a robot who can't be held accountable for it. Of course, we know that the people who designed those machines can be held accountable if they create software that accidentally harms people through bugs, or deliberately harms people by design.

Imagine a self-driving car designer who programmed the machines to act in bad faith: for example, to take risks to get the car's passenger to their destination sooner at the potential expense of other people on the road. You wouldn't say "it's just an algorithm, right?" Now, what if people died due to unforeseen consequences of how self-driving car designers wrote their software rather than deliberate malice? You still wouldn't say, "It's just an algorithm, right?" You would hold the software designers liable for their failure to test their work adequately. Clearly, the reason you would react the same way in the good-faith scenario as in the bad-faith one is the effect of the poor decision, not whether the intent was malicious or merely careless.

Algorithms that are as autonomous as self-driving cars, and perhaps less transparent, control your reputation. Unlike with self-driving cars, no one is talking about liability for what happens when they turn your reputation into a pile of burning wreckage.

Algorithms are also incredibly flexible and changeable. Changing code requires people to think and to have discussions with each other, but it doesn't require contending with the laws of physics, and, other than paying humans for their time, it has little cost. Exploiting the majority's unfamiliarity with code in order to act as if modifying software were a huge burden is a good way to avoid work, but a bad way to tend the garden of knowledge.

Plausible Deniability

Designers and implementors of information retrieval algorithms, then, enjoy a certain degree of plausible deniability that designers of algorithms to control self-driving cars (or robots or trains or medical devices) do not.

During the AmazonFail incident, in which an (apparent) bug in Amazon's search software caused books on GLBT-related topics to be miscategorized as "adult" and hidden from searches, defenders of Amazon cried "It's just an algorithm." The algorithm didn't hate queer people, they said. It wasn't out to get you. It was just a computer doing what it had been programmed to do. You can't hold a computer responsible.

"It's just an algorithm" is the natural successor to the magical intent theory of communication. Since your intent cannot be known to someone else (unless you tell them -- but then, you could lie about it), citing your good intent is often an effective way to dodge responsibility for bad actions. Delegating actions to algorithms takes the person out of the picture altogether: if people with power delegate all of their actions to inanimate objects, which lack intentionality, then no one (no one who has power, anyway) has to be responsible for anything.

"It's just an algorithm" is also a shaming mechanism, because it implies that the complainer is naïve enough to think that computers are conscious. But nobody thinks algorithms can be malicious. So saying, "it's just an algorithm, it doesn't mean you harm" is a response to something nobody said. Rather, when we complain about the outcomes of algorithms, we complain about a choice that was made by not making a choice. In the context of this article, it's the choice to not design systems with an eye towards their potential use for harassment and defamation and possible ways to mitigate those risks. People make this decision all the time, over and over, including for systems being designed today -- when there's enough past experience that everybody ought to know better.

Plausible deniability matters because it provides the moral escape hatch from responsibility for defamation campaigns, on the part of people who own search engines and social media sites. (There's also a legal escape hatch from responsibility, at least in the US: CDA Section 230, which shields every "provider or user of an interactive computer service" from liability for "any information provided by another information content provider.") Plausible deniability is the escape hatch, and advertising is the economic incentive to use that escape hatch. Combined with algorithmic opacity, they create a powerful set of incentives for online service providers to profit from defamation campaigns. Anything that attracts attention to a Web site (and, therefore, to the advertisements on it) is worth boosting. Since there are no penalties for boosting harmful, false information, search and recommendation algorithms are amplifiers of false information by design -- there was never any reason to design them not to elevate false but provocative content.

Transparency

I've shown that information retrieval algorithms tend to be bad at limiting the spread of false information because doing the work to curb defamation can't be easily monetized, and because people have low expectations for software and don't hold its creators responsible for their actions. A third reason is that the lack of visibility of the internals of large systems has a chilling effect on public criticism of them.

Plausible deniability and algorithmic opacity go hand in hand. In "Why Algorithm Transparency is Vital to the Future of Thinking", Rachel Shadoan explains in detail what it means for algorithms to be transparent or opaque. The information retrieval algorithms I've been talking about are opaque. Indeed, we're so used to centralized control of search engines and databases that it's hard to imagine them being otherwise.

"In the current internet ecosystem, we–the users–are not customers. We are product, packaged and sold to advertisers for the benefit of shareholders. This, in combination with the opacity of the algorithms that facilitate these services, creates an incentive structure where our ability to access information can easily fall prey to a company’s desire for profit."
-- Rachel Shadoan
In an interview, Chelsea Manning commented on this problem as well:
"Algorithms are used to try and find connections among the incomprehensible 'big data' pools that we now gather regularly. Like a scalpel, they're supposed to slice through the data and surgically extract an answer or a prediction to a very narrow question of our choosing—such as which neighborhood to put more police resources into, where terrorists are likely to be hiding, or which potential loan recipients are most likely to default. But—and we often forget this—these algorithms are limited to determining the likelihood or chance based on a correlation, and are not a foregone conclusion. They are also based on the biases created by the algorithm's developer....

These algorithms are even more dangerous when they happen to be proprietary 'black boxes.' This means they cannot be examined by the public. Flaws in algorithms, concerning criminal justice, voting, or military and intelligence, can drastically affect huge populations in our society. Yet, since they are not made open to the public, we often have no idea whether or not they are behaving fairly, and not creating unintended consequences—let alone deliberate and malicious consequences."
-- Chelsea Manning, BoingBoing interview by Cory Doctorow

Opacity results from the ownership of search technology by a few private companies, and their desire not to share their intellectual property. If users were the customers of companies like Google, there would be more of an incentive to design algorithms that use heuristics to detect false information that damages people's credibility. Because advertisers are the customers, and because defamation generally doesn't affect advertisers negatively (unless the advertiser itself is being defamed), there is no economic incentive to do this work. And because people don't understand how algorithms work, and couldn't understand any of the search engines they used even if they wanted to (since the code is closed-source), it's much easier for them to accept the spread of false information as an inevitable consequence of technological progress.

Manning's comments, especially, show why the three problems of economic incentives, plausible deniability, and opacity are interconnected. Economics give Internet companies a reason to distribute false information. Plausible deniability means that the people who own those companies can dodge any blame or shame by assigning fault to the algorithms. And opacity means nobody can ask the people who design and implement the algorithms to do better, because you can't critique the algorithm if you can't see the source code in the first place.

It doesn't have to be this way. In part 4, I'll suggest a few possibilities for making the Internet a more trustworthy, accountable, and humane medium.

To be continued.


Do you like this post? Support me on Patreon and help me write more like it.