This is the second post in a two-part series. The first part is here.
Shrinking the Social Trusted Computing Base
In a software system, the trusted computing base is the portion of the software that must simply be trusted to be correct, because it hasn't been formally verified. For the purposes of this analogy, it's not important what "formally verified" means, just that there is a way to determine whether something has been verified or not -- often, "verified" means automatically checked by a machine. If you have software that verifies other software, you might ask who verifies the verifier. Ultimately, there's always some piece of code at the bottom -- it can't be turtles all the way down. That code has to be reviewed by people to increase the likelihood that it's correct. Of course, people make mistakes, and it's always possible that reviewers will fail to spot errors -- but the more people review that code carefully, the more confident we can be that it's correct.
Moreover, the smaller the amount of code that has to be verified in this exacting way, the more confidence we can have that the whole system is reliable, even though we can never be totally sure that a system is free of errors. When people interested in software quality talk about making the trusted computing base smaller, this is what they mean: people make mistakes, so it's best to have computers (which don't get bored) do the tedious work of checking for errors, and to limit the amount of work that fallible humans have to do.
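To make the analogy concrete, here is a minimal sketch in Python of a toy proof checker for an invented one-rule logic -- the axioms, the `->` notation, and the function names are all made up for illustration, not taken from any real proof system. The checker is the entire trusted computing base of this toy system: proofs can come from arbitrarily large, unverified tools, but as long as these few lines are correct, every proof the checker accepts is valid.

```python
# Toy trusted computing base: a tiny proof checker for a made-up logic.
# The axioms and the "->" syntax are invented for illustration only.
AXIOMS = {"p", "p -> q"}

def follows_by_modus_ponens(step, earlier):
    # A step is justified if some earlier line `a` and the earlier
    # line "a -> step" together imply it (modus ponens).
    return any(f"{a} -> {step}" in earlier for a in earlier)

def check_proof(steps):
    # This function is the whole trusted computing base: if it is
    # correct, every proof it accepts is valid, no matter what
    # enormous, unverified tool generated the proof.
    earlier = set()
    for step in steps:
        if step not in AXIOMS and not follows_by_modus_ponens(step, earlier):
            return False
        earlier.add(step)
    return True

print(check_proof(["p", "p -> q", "q"]))  # True: "q" follows by modus ponens
print(check_proof(["q"]))                 # False: "q" is an unjustified step
```

Keeping `check_proof` this small is the point: humans only have to hand-review the checker, not the machinery that produces the proofs.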
People who understand the imperative to keep the trusted computing base small nevertheless sometimes fail to see that social interactions follow a similar principle. In the absence of a formal code of conduct, when you join a group you have to trust that everybody in that group will respect you and treat you fairly. Codes of conduct don't prevent people from harming you, but they do give you increased assurance that if somebody does, there will be consequences, and that if you express your concerns to other people in the group, they will take those concerns seriously. When there is a code of conduct, you still have to trust the people in charge of enforcing it to enforce it fairly and humanely. But if you disagree with their actions, you have a document to point to in order to explain why. In the absence of a code of conduct, you instead have to argue with them about whether somebody was or was not being a dick. Such arguments are subjective and likely to generate more heat than light. It saves time and energy to be explicit about what we mean by not being a dick. And that, in turn, minimizes the work for people joining the group: they just have to review your code of conduct and decide whether they think you will enforce it, rather than reviewing the character of every single person in the group.
It's clear that nerds don't trust a rule like "don't be a dick" when they think it matters. Open-source or free software project maintainers wouldn't replace the GPL or the BSD license with a text file consisting of the words "Don't be a dick." If "don't be a dick" is a good enough substitute for a code of conduct, why can't we release code under a "be excellent to each other" license? Licenses exist because if someone misuses your software and you want to sue them in order to discourage such behavior in the future, you need a document to show the lawyers to prove that somebody violated a contract. They also exist so that people can write open-source software while feeling confident that their work won't be exploited for purposes they disagree with, such as producing closed-source software. A "don't be a dick" license wouldn't serve these purposes. And a "don't be a dick" code of conduct doesn't serve the purpose of making people feel safe or comfortable in a particular group.
When do you choose to exercise your freedom to be yourself? When do you choose to exercise your freedom to restrain yourself in order to promote equality for other people? "Don't be a dick" offers no answer to these questions. What guidance does "don't be a dick" give me if I want to make dirty jokes in a group of people I'm not intimate with -- co-workers, perhaps? If I take "don't be a dick" to mean they should trust that I don't intend to be a dick, then I should go ahead and do it, right? But what if I make somebody uncomfortable? Is it their fault, because they failed to trust me enough to believe that my intent was to have a bit of fun? Or is it my fault, for failing to consider that regardless of my true intent, somebody else might not give me the benefit of the doubt? If, rather than promising not to be a dick, I commit to taking context into account before speaking, and to considering how I sound to other people, I might choose to self-censor. I don't know of another way to coexist with other people without constantly violating their boundaries. That requires sensitivity and the ability to treat people as individuals, rather than commitment to the fixed code of behavior whose existence "don't be a dick" implies.
I wrote before, in "Self-Censorship", about the idea of "not censoring yourself", and noted that saying everything that comes into your head isn't compatible with respecting other people. If I censor myself sometimes, in different ways depending on what context I'm in, am I failing to be my entire self? Maybe -- or maybe, as I suggested before, I don't have a single "true self", and who I am is context-dependent. And maybe there's nothing wrong with that.
Part of what politics are about is who gets accorded the benefit of the doubt and who gets denied it. For example, when a woman accuses a man of raping her, there's an overwhelming tendency to disbelieve her, which is often expressed as "giving the man the benefit of the doubt" or considering him "innocent until proven guilty." But there is really no neutral choice: either one believes the woman who says she was raped is telling the truth, or believes that she is lying. You can give the benefit of the doubt to the accused and assume he's innocent, or give the benefit of the doubt to the accuser and assume that she would only make such a serious accusation if it's true. When you encourage people to accord others the "benefit of the doubt", you're encouraging them to exercise unconscious bias, because according some people the benefit of the doubt means withholding it from others. In many situations, it's not possible for everybody to be acting in good faith.
Resisting Doublespeak
Maybe we shouldn't be surprised that doublespeak is so widespread in an industry largely built on finding ways to deliver a broader audience to advertisers, yet which bills itself as driven by "innovation" and "making the world a better place". Advertising-funded companies are ultimately driven by that business model -- everything they do is about delivering more eyeballs to advertisers. If some of the things they do happen to make people's lives better, that's an accident. A company that did otherwise would be breaching its obligations to stockholders or investors.
Likewise, maybe we also shouldn't be surprised that in an industry built on the rhetoric of "rock star" engineers, the baseline assumption is that encouraging everybody to be an individual will result in everybody being able to be their best self. Sometimes, you need choral singers, not rock stars. It might feel good to sing a solo, but often, it's necessary to blend your voice with the rest of the choir. That is, in order to create an environment where it's safe for people to do their best, you need to be attuned to social cues and adjust your behavior to match social norms -- or to consciously act against those norms when it would be better to discard them and build new ones.
Both "be yourself" and "don't be a dick" smack of "there are rules, but we won't tell you what they are." At work, you probably signed an employment agreement. In life, there are consequences if you violate laws, and there are also consequences if you violate norms. "Being yourself" always has limits, and being told to be your entire self tells you nothing about what those limits are. Likewise, "don't be a dick" -- and the refusal to codify community standards of behavior that goes with it -- signals an unwillingness to help newcomers integrate into a community: to preserve the good things about its culture while still leaving them room to be themselves without trampling on others.
When you refuse to tell somebody the rules, you're setting them up for failure, because breaking unwritten rules is usually punished quietly, through social isolation or rejection. The absence of rules is effectively a threat that even if you want to do your best, you could be excluded at any time for violating a norm you didn't know existed. (See also "The Tyranny of Structurelessness" by Jo Freeman.)
So instead of instructing people to "bring your whole self to work", we could say what is welcome in the office -- ideas, collaboration, respect -- and what should be left at the door -- contempt for other people's chosen programming languages, text editors, belief systems, or dietary habits; exclusive behavior; and marginalizing language. Instead of telling people not to be a dick, we could work together to write down what our communities expect from people. And instead of preaching about changing the world, we could admit that when we work for corporations, we're obligated above all to maximize value for the people who own them.
Saying things you hope are true doesn't make them true. Insisting that your environment is safe for everybody, or that everybody already knows how not to be a dick, doesn't create safety or teach respect, any more than claiming to be a "10x engineer" makes you one. Inclusion requires showing, not telling.
Do you like this post? Support me on Patreon and help me write more like it.
Kudos & questions
Date: 2015-12-27 02:51 pm (UTC)
You're right that "suspension of disbelief" is a see-saw. In our current system, the "trusted code base" is attorneys, juries, judges; they have demonstrated that unconscious bias many times. However, I do believe it's possible to suspend belief; to accept that every person knows reality differently, including and especially the judges.
In the world we hope to make, we will have to transform that binary frame into a more workable justice system, one which cares about real justice for both targets & perpetrators.