Long overdue, here are my notes on the talks at CUFP 2014 (September 6, 2014). This is the last in a series of conference write-up posts from me that cover CUFP, the Haskell Symposium, Erlang Workshop, and the three days of ICFP itself. CUFP is the workshop for Commercial Users of Functional Programming, and I was honored to have served on the program committee for it this year.

Joe Armstrong's invited talk, "Making Money with FP", was quite entertaining... for the most part anyway. His comment that you can't sell a language, and must sell a project written in it, harked back for me to working at Laszlo Systems in 2005.

He made the point, about adoption of FP, that "nobody ever got sacked for using Microsoft products (or Java, or C++)" -- also this gem: "You get paid based on the number of people you manage, so people hate the idea that ten Haskell programmers can do what 100 C++ programmers can do." (I'm not confident that that generalization always holds, but it does seem to be true in my experience.)

One aside that marred an otherwise great talk was an unnecessary use of "guys" on a slide, when Armstrong said (while speaking to the same slide) "technical guys enjoy an argument". One or the other and I might have let it slide, but not all "technical guys" enjoy an argument, plus technical women who enjoy arguments are punished for that while technical women who don't enjoy arguments tend to get steamrolled.

Then, Armstrong went on to talk about different business models for making money from FP. Most of this advice seemed broadly applicable, but it was still good to hear it coming from one of the people who is most qualified to talk about "how to make money with FP". He implied, I think, that the best two routes for a person trying to get into business with FP were either a consultancy (where you are an independent businessperson who sells consulting hours or services to other companies) or a development/R&D company where the goal is to "develop a product and sell it to a bigger company that can sell it." He explained how a good way to gain a reputation is to participate in the standardization of a language or framework: either choose a new standard or invent one of your own, and then make the best and first implementation. Then, you sell or give away software to build your reputation (which is why you can't sell a language, I guess!) and finally, sell the company :D
Previous notes: ICFP, days 1, 2, 3.

These notes are about Friday, September 5. Thursday, I missed the whole day of conferencing and only went to the industrial reception in the evening. I hadn't planned to go to many talks on Thursday anyway, but I ended up spending Wednesday night (all of it) in the ER at Östra Sjukhuset, since I didn't know how else to get a relatively minor thing requiring antibiotics treated at night. (Since I didn't get seen till 5 AM, I should have just waited till the next morning!) So that meant sleeping till about 3 on Thursday.

On Friday, I'd been planning to mostly go to the Erlang Workshop and drop in on a Haskell Symposium talk or two, but I ended up doing the opposite, oops. Somehow, I always end up going to the "Future of Haskell" discussion, even in years when I'm not doing Haskell. I already talked about the discussion a little bit in my Ada Initiative fundraiser post; what Wouter said about being encouraging to newcomers was part of it. During the discussion part, somebody stood up and said "well, as far as I'm concerned, everything's fine because the Haskell community has been friendly to me and I've had a good experience." I'm probably being a bit unfair to him, but certainly the implication was that his good experience was a sign there was no problem, even if he didn't say so explicitly. I stood up and pointed out that the people we need to listen to are the ones who aren't in the room -- presumably, everyone who was attending the Haskell Symposium was there because they had a good experience with the community. I don't think I exactly said so, but further compounding things is that some of the specific people who have been particularly hostile to novices were in the room. If you have no idea what I'm talking about at this point, Gershom's "Letter to a Young Haskell Enthusiast" should be a good start. I'm motivated here by having friends who wanted to learn Haskell but gave up because people were hostile to them, and I hate that the response to that is for people to say their experiences were good -- that doesn't give me anything to tell my friends.

I also touched on this in my fundraiser post, but the fact that on day 2 of the Haskell Symposium, there were maybe 100 people in the room and (as far as I could tell) none of them were women, by itself, indicates that there's a problem. Arguments that women just aren't interested in Haskell, or aren't good at it, or prefer to do real-world things where they get to help people (unlike teaching and research, I guess?) have so little merit that they're not worth discussing. I know that the lack of women in the room was for a reason, which is that largely, women aren't finding the Haskell community -- or the functional programming community, or communities around programming language theory and practice more broadly -- welcoming. The fact that there are a few exceptions means that a few women who have an exceptional level of interest, talent, and (for some, anyway) privileges along axes other than gender have been able to make it. They should be listened to for the same reason that someone who runs marathons while wearing a 100-pound backpack knows more about running marathons than someone who runs marathons unladen. But it's not enough for a few exceptional women to be allowed in -- to paraphrase Bella Abzug, equality doesn't mean access only for the exceptional women, but for mediocre women to do as well as mediocre men.

Wouter made a comment about "encouraging women", and while we should, I wish people would spend less time saying "encourage women", and more time saying "don't be a jerk". Of course, neither imperative means much without further detail. As Gershom's letter reflects (indirectly), when I say that the community is unwelcoming to women, it's often not about overt sexism (though there is some of that), but rather, a very popular aggressive, adversarial, confrontational teaching style that many people apply to a broad range of interactions besides those that are understood as teaching situations. And it's not that women don't like to be or don't want to be aggressive -- it's that they know from lived experience that being aggressive and adversarial with men has consequences, and not good ones. This is the double bind: there is a very narrow range of allowable behavior for women in any grossly male-dominated subculture, and a very wide range for men. So besides just "encouraging women", men also need to approach intellectual conversations in ways that aren't about showing dominance... even when they're only talking to each other.

Here's the video of the entire discussion, which I think includes all the audience comments.

After lunch, I went to some of the Erlang Workshop talks. The first one was Amir Ghaffari on "Investigating the Scalability of Distributed Erlang". The talk didn't spend much time introducing Distributed Erlang itself, but rather focused on running DE-Bench (the benchmark suite for it) on different numbers of nodes to see how well Distributed Erlang scales. DE-Bench lets you run tests synchronously or asynchronously, on one or all nodes. Ghaffari found that throughput drops off dramatically if you start using global operations (which, intuitively, isn't surprising). He also found that latency for global commands grows linearly in the number of nodes -- I wasn't so sure why that was true. He found that Riak didn't scale past 70 nodes at all -- at that point, an overloaded server process became the bottleneck. He concluded that everything about Distributed Erlang scales well except for RPC; not being familiar with Distributed Erlang, I'm not sure how much of a problem that is.

In the next talk in the same session, Chris Meiklejohn spoke on deterministic distributed dataflow programming for Erlang. The talk was about using Erlang to build a reference implementation for CRDTs, as well as building a new language (that is, a subset of Erlang) for CRDTs. It was cool that Meiklejohn was implementing some ideas from Lindsey's work, both because I know Lindsey and because it was something I'd heard about before, so I had at least a moment where I got to feel smart ;)

Meiklejohn and colleagues' system, DerFlow, is state-based; CRDTs require something to grow monotonically, and in this case, it's state. Meiklejohn pointed out that for distributed systems, this is great, because unreliable networks mean that packets could get dropped, but never cause already-transmitted data to be forgotten. Proving correctness means proving properties over the lattice of choices; a choice is a particular sequence of messages received by a program. Their memory model is a single-assignment store: any given memory location goes from null (no binding), to variable (which means it's "partially bound"), to value (bound). If one node asks for something unbound, it will wait until it becomes bound -- so, deadlock can happen. I'm handwaving a bit in my explanation here, but fortunately, you can go look at the code yourself on Github!
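To make the memory model a bit more concrete, here's a minimal sketch in Haskell (rather than Erlang) of how a single-assignment dataflow variable behaves. This is just my illustration of the null/bound progression described above, not DerFlow's actual API:

```haskell
import Control.Concurrent

-- A single-assignment ("dataflow") variable: it starts out unbound,
-- readers block until it is bound, and a binding is never overwritten.
newtype DVar a = DVar (MVar a)

newDVar :: IO (DVar a)
newDVar = DVar <$> newEmptyMVar

-- Bind the variable; returns False if it was already bound.
bind :: DVar a -> a -> IO Bool
bind (DVar m) x = tryPutMVar m x

-- Blocks until the variable is bound; does not consume the binding.
readDVar :: DVar a -> IO a
readDVar (DVar m) = readMVar m
```

A reader that asks for an unbound variable simply waits, so two readers each waiting on a variable the other is supposed to bind will deadlock, just as described.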

Finally, I went to one last Haskell Symposium talk that I wouldn't have gone to if Ed Kmett hadn't recommended it, and indeed, it was worth going to: Atze van der Ploeg's "Reflection Without Remorse". The talk was cool, but at this point my brain was pretty fried and I bet the paper will be even cooler. van der Ploeg motivated the problem by talking about how a chain of list append operations gets evaluated depending on associativity -- depending on where you parenthesize, one way is a lot more expensive than the other. I think this is the same problem as the infamous foldr/foldl problem. The solution (that makes the cost of evaluation the same regardless of associativity) is to rewrite expressions in CPS form -- this looks to me a lot like the build operation from shortcut deforestation. If I'm not totally lost, I think during this talk, I finally understood for the first time why build is called build (and I did my undergrad and master's theses on shortcut deforestation) -- the build form represents building up a list as a continuation. Then you run into a problem with having to convert between list representations and function representations and vice versa -- I think I actually ran into something similar in my master's thesis work that I didn't quite know how to handle, so if I'm in a mood for revisiting some ancient history, maybe I'll try to figure out if there's really a connection there or not.
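For concreteness, here's the classic difference-list version of the fix for the list case, where a list is represented as a function so that append associates for free. This is my own illustration of the starting point, not the paper's more general construction (the paper improves on it, since converting back and forth is where the "remorse" comes in):

```haskell
-- Left-nested (++) is quadratic: each append re-walks the left spine.
slow :: [[a]] -> [a]
slow = foldl (++) []

-- Representing a list as a function (a "difference list") makes
-- append O(1) regardless of association; convert back at the end.
type DList a = [a] -> [a]

toDL :: [a] -> DList a
toDL xs = (xs ++)

fromDL :: DList a -> [a]
fromDL f = f []

-- Same left-nested fold, but now each "append" is just composition.
fast :: [[a]] -> [a]
fast = fromDL . foldl (\acc xs -> acc . toDL xs) id
```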

As it turns out, Okasaki had already solved the problem -- just for lists -- in Purely Functional Data Structures. But you can actually generalize the problem to all monads, not just lists! That's where it gets really cool. The continuation monad transformer for monads (which I don't understand) is apparently another instance of the same problem. My mind started melting a little at this point, but the upshot is that you can do monadic reflection, where you have interchangeable continuation-based and concrete forms of a data structure and can alternate between building and reflection.

That's all for Friday -- stay tuned for my final ICFP-ish post, in which I'll summarize the CUFP talks.
These notes are about Wednesday, September 3.

The first talk I went to was Carlo Angiuli's talk on homotopical patch theory. I understood very little about the actual new work in the talk, but I'm very glad to finally have at least a vague sense of what homotopy type theory is about (though I no longer remember how much of that came from the talk, and how much came from talking to Carlo and to Ed Kmett the day before :) I was about to write down my hand-wavy summary of what I think it's about, but I realized it's too hand-wavy to even write down. But, I want to read more about it, and if you're curious, so can you!

The next talk I went to was Niki Vazou's talk on refinement types for Haskell. Refinement types are cool, but sadly, I lost the thread (totally my fault in this case) somewhere after Vazou said something about using refinement types to prove termination for themselves. At that point, I wrote down an outraged little comment on my notepad that ended with a note to myself to read the paper. The other thing about this talk that I noted, which I hate to mention -- but really, what I hate is that it's even noteworthy at all -- is that during the Q&A period, a woman asked a question at a talk given by a different woman. This was the 10th ICFP I've attended, and I'm pretty sure this was the first time I've seen that happen at ICFP.

Then I missed most of Conor McBride's talk "How to Keep Your Neighbours in Order", indirectly, due to listening to Ed tell his (edited, I'm sure) life story. If you get the chance, you should ask Ed his life story; he may be the closest person to a character in a Hunter S. Thompson book who you're likely to meet at a computer science conference.

Next (for me) was Simon Marlow's talk "There is no Fork: an Abstraction for Efficient, Concurrent, and Concise Data Access", which was probably the best talk title of ICFP 2014. Simon talked about his work on Haxl, motivated by wanting an implicitly concurrent language for fetching data incrementally and lazily. This reminded me a bit of what I overheard when I was working on Rust about Servo's incremental layout, but I don't remember it well enough to know if that's a red herring or not. I'll be interested to read the paper and see if there's any comparison with Erlang, as well.

Jeremy Gibbons began his talk "Folding Domain-Specific Languages: Deep and Shallow Embeddings" by saying that his co-author Nicolas Wu couldn't be there because "he has taken delivery of a new baby". Which was funny, but possibly took someone else out of the picture a bit ;) The talk was helpful to me since I spent four years at Portland State hearing people talking about deep and shallow embeddings without knowing what that meant, and now I do. Deep embeddings are syntax-driven and shallow embeddings are semantics-driven (unless it's the opposite); in a shallow embedding, operations are functions in the host language and in a deep embedding, operations are types in the host language (ditto). It's a similar dichotomy to the expression problem. I wrote in my notes "Somehow you can turn context-sensitive interpretations into compositional ones (read the paper)". At that point, I was literally too tired to stand up, so I'm just pleased with myself for having remembered this much!
These notes are about Tuesday, September 2.

I caught the end of Robby Findler's invited talk on behavioral software contracts. That was enough to catch a point that I found thought-provoking: that contracts aren't a subset of types, because contracts can express protocol-based properties (similarly to how session types do), which fundamentally involve assignment. I'm still mulling it over, and I should probably just watch the whole talk, but it might be the answer to a question that has plagued me for years, which is: "are contracts just type signatures that you put in a comment?" (Not meaning to participate in a holy war here -- I assume the problem is my lack of understanding.)

If that's true, it reminds me of typestate in Rust, which I implemented for my intern project and which was later removed from the language. Or, maybe, Rust's typestate wasn't as powerful as contracts are, and that's why people didn't find it useful in practice. I do remember always being really confused about the interaction between typestate and assignment -- we went back and forth between thinking that typestate predicates should only be able to refer to immutable variables, and thinking that we'd take the YOLO approach and leave it as a proof obligation for the user that mutation can't cause unsoundness. So maybe if I had understood contracts at the time, the whole thing would have gone better. In any case, I'd like to read more so that I can articulate the difference between typestate and contracts.
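For readers who haven't seen typestate-style protocols before, here's one way the idea can be encoded with phantom types in Haskell. This is an invented example, not how Rust typestate actually worked (and it sidesteps the mutation problem entirely, since nothing here is mutable):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Protocol states, promoted to the type level.
data State = Disconnected | Connected

-- The state parameter is phantom: it exists only in the type.
newtype Socket (s :: State) = Socket String

open :: String -> Socket 'Disconnected
open host = Socket host

connect :: Socket 'Disconnected -> Socket 'Connected
connect (Socket host) = Socket host

-- Only a connected socket can send; applying send directly to the
-- result of 'open' is a compile-time error.
send :: Socket 'Connected -> String -> String
send (Socket host) msg = "to " ++ host ++ ": " ++ msg
```

The protocol "you must connect before you send" is enforced statically, which is the kind of property the contract talks handled with a mix of static and dynamic checks.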

I caught slightly more of David Van Horn's talk on soft contract verification, though I missed part of that talk too. The principle here is to allow assigning blame when an assertion fails at runtime: then, you can write your code so as to have strong enough contracts so that your code is to blame as infrequently as possible when something goes wrong (if I understood correctly, anyway). ("Blame" is a technical term introduced by Dimoulas, Findler, Flanagan, and Felleisen, at least if I have the correct initial reference.) As in soft typing, "soft" here means that the contract checker never rejects a program -- it just introduces runtime checks when it can't convince itself of a particular safety property at compile time. This also recalls Rust typestate for me, which had a very similar approach of falling back to runtime verification (actually, in Rust, all typestate assertions were verified at runtime; we thought that would be a simpler approach, and if the feature had persisted, we might have implemented some sort of analysis pass to eliminate some of the dynamic checks). In my copious free time, I'd love to revisit Rust typestate and compare and contrast it with the work presented in these two talks, as well as gradual typing and effect systems, maybe even as a paper or experience report. (Which, of course, would involve me learning about all of those things.) I want to say that Rust typestate did have an analogous notion to blame: it was all about forcing each function to declare its precondition, so that if that precondition was violated at runtime, we knew it was the caller's fault, not the callee's. But I'd like to read the paper to see how much of a role inference plays.

As a much more trivial aside, I really liked that Van Horn used ⚖ as an operator, at least in the slides (as in, C ⚖ M). There should be more Unicode operators in papers! It's 2014; we don't need to limit ourselves to what was built into a 1990s-era version of LaTeX anymore.

In any case, the parts of Van Horn's and Findler's talks I heard made me think "this is the right way to do what we were trying to do with typestate". I want to be sure I believe that, though. I say this because their approach to handling mutation is to statically check any contracts that don't involve assignment -- other contracts revert to runtime checks, but the checks always happen, either statically or dynamically. My memory is hazy, but in the context of Rust, I think we talked about introducing additional precondition checks at each use of a variable involved in a typestate predicate, but quickly decided that would be inefficient. In any case, those two talks made me want to revisit that work, for the first time in a while!

I missed most of Norman Ramsey's talk "On Teaching How to Design Programs" as well, but the paper seems worth reading too. Two things I did catch: Norman saying "Purity has all sorts of wonderful effects" (I think in terms of helping students discipline their thinking and avoid just banging on the keyboard until something works, though I don't recall the context), and him making the point that the HTDP approach makes it easier to grade assignments based on how systematic the student's design is, rather than a laundry list of point deductions.

Next, I went to Richard Eisenberg's talk "Safe Zero-Cost Coercions for Haskell". I have painful memories of this line of work dating back to 2007 and 2008, when I was reviving the GHC External Core front-end and had to figure out how to adapt External Core to represent the new System FC type system features, which (to me at the time) seemed to make the Core type system twice as complicated for unclear benefit. (External Core was removed from GHC some years later, alas.) I'm willing to say at least provisionally, though, that the work Eisenberg presented cleans up the coercion story quite a bit. I also appreciated the motivation he gave for introducing coercions into the type system at all, which I hadn't heard formulated quite like this before: you can typecheck System F just using algebraic reasoning, but when you want to introduce coercions (which you do because of GADTs and newtypes), contravariance ruins everything. I think a way to summarize the problem is that you get overlapping instances, only with type families rather than just type classes.

To solve the problem, Eisenberg and colleagues introduce two different equality relations: nominal ~, and structural ~~. This allows the type system to incorporate coercions based both on nominal type equality, and structural type equality, without having to pick just one. Then, each type parameter gets a "role", which can be either "structural" or "nominal". This allows coercion kinds (my nemesis from the External Core days) to just go away -- although to me, it seems like rather than actually taking coercions out of the kind system, this approach just introduces a second kind system that's orthogonal to the traditional one (albeit a very simple kind system). I guess it's possible that separating out concerns into two different kind systems makes the implementation and/or reasoning simpler; also, as far as I can tell, roles are more restricted than kinds in that there's no role polymorphism. (I'm not sure if there's kind polymorphism, either, although there certainly was in GHC at least at some point.) Actually, there are three roles: "nominal" (meaning "this parameter name matters and is not interchangeable with structurally equal types"), "representational" (for a type that is interchangeable with any others that are structurally equal to it), and "phantom" (type parameters that are unused on the right-hand side of a definition). I wrote in my notes "I wonder if this sheds any light on Rust traits", but right now I'm not going to elaborate on that query!
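If I have it right, the user-facing side of this in GHC 7.8 is the Data.Coerce module, where a coercion is free whenever the roles line up:

```haskell
import Data.Coerce (coerce)

newtype Age = Age Int

ages :: [Age]
ages = [Age 30, Age 40]

-- The list type's parameter has a representational role, so a whole
-- [Age] can be reinterpreted as [Int] with no runtime cost, provided
-- the Age constructor is in scope at the coercion site.
asInts :: [Int]
asInts = coerce ages
```

If the parameter's role were nominal (say, because it appears under a type family), the same coerce call would be rejected.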

The implications of the work are that generalized newtype deriving now has a safe implementation; the type system makes it possible to only allow unwrapping when the newtype constructor is in scope. (This, too, reminds me of a Rust bug that persisted for a while having to do with "newtype dereferencing".) The results were that the new role system uncovered three legitimate bugs in libraries on Hackage, so that's pretty cool. Also, Phil Wadler asked a question at the end that began with something like, "...here's how Miranda did it..." (Not something one hears a lot!)

Finally, I stayed for François Pottier's talk "Hindley-Milner Elaboration in Applicative Style", which I understood more than I expected to! He began by saying something that I noticed long ago, but originally chalked up to my own stupidity: Algorithm W, in its original presentation, was "imperative, and messy". We want a simpler, more declarative formulation of type inference. Pottier claims that conjunctions and equations are simpler than compositions and substitutions -- I agree, but I'm not sure if that's based on something objective or if that's just what works well for my brain. He defines a constraint language that looks like λ-calculus with existential types, which allows constraint solving to be specified based on rewriting. On paper, it's a declarative specification, but the implementation of it is still imperative (for performance reasons). It sounds like it might be fun to prove that the imperative implementation implements the declarative specification, though perhaps he is already doing that!
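To give a flavor of the "conjunctions and equations" view: here's a miniature solver that treats inference as rewriting a set of equality constraints into a substitution. This is my toy, nothing like Pottier's actual constraint language (and it omits the occurs check a real solver needs):

```haskell
-- A tiny type language.
data Ty = TVar String | TInt | TFun Ty Ty deriving (Eq, Show)

type Subst = [(String, Ty)]

-- Apply a substitution, chasing chains of variable bindings.
apply :: Subst -> Ty -> Ty
apply s (TVar v)   = case lookup v s of
                       Just t  -> apply s t
                       Nothing -> TVar v
apply s (TFun a b) = TFun (apply s a) (apply s b)
apply _ t          = t

-- Rewrite the conjunction of equations until it is empty (success)
-- or an unsolvable equation remains (failure).
solve :: [(Ty, Ty)] -> Subst -> Maybe Subst
solve [] s = Just s
solve ((a, b) : cs) s = case (apply s a, apply s b) of
  (t1, t2) | t1 == t2      -> solve cs s
  (TVar v, t)              -> solve cs ((v, t) : s)
  (t, TVar v)              -> solve cs ((v, t) : s)
  (TFun a1 b1, TFun a2 b2) -> solve ((a1, a2) : (b1, b2) : cs) s
  _                        -> Nothing
```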

Stay tuned for my notes on day 3, when I get around to editing them.
During this year's ICFP, I took probably more notes than I've taken at any other conference I've gone to. Now some of my notes were silly ideas or random to-do items that would have distracted me if I didn't write them down, but a lot of them were actually about the talks I was listening to, surprisingly enough!

In the interest of posterity, as well as justifying to my employer why it was a good idea for them to pay $3000 for me to go to an academic conference when I'm not a researcher, I'm going to try to summarize those notes here. What follows is my attempt to turn my notes on the first day (September 1) into something half coherent. I'll do a separate post for each day. I will try to link to the video from each talk.

The first talk of the day that I caught was Daniel Schoepe talking about SELinq. Yes, this is LINQ as in LINQ. He (and his coauthors Daniel Hedin and Andrei Sabelfeld) wanted to build on top of what LINQ already does -- making database queries typed -- to annotate types with "public" or "private". This means, probably, exactly what you'd think it would mean in an everyday sort of database application: for example, they applied their work to an example from the PostgreSQL tutorial site and showed that they could implement a rule that in a database of people, demographic info, and addresses, each person's exact address is private -- so queries can get aggregate data about what regions people live in, but a query that tries to look up an individual's street address would be ill-typed. That's really cool!

Schoepe et al.'s work is based on FlowCaml, which is an information flow analysis for OCaml. Crucially, the work relies on embedding the database query language in the underlying programming language, so you can piggyback on the host language's type system. That's cute! It also relies on baking operations like list append into the language, which is a little sad. (In my never-published -- and never-finished -- dissertation proposal, I wanted to explore using a combination of proofs and types to modularize the primitive operations of a language. I wonder if an approach like that would be useful here.)

They proved a soundness theorem that guarantees non-interference, and implemented a source-to-source compiler that generates F# code. Their type inference is conservative, which doesn't violate any intuitions: that is, it's always safe to treat public data as private. In future work, they want to apply it to front-end JS web apps, and in the question and answer session, Schoepe said it shouldn't be hard to generalize the work to arbitrary lattices (not just the one with "public" and "private").
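To make the public/private idea concrete, here's a tiny phantom-label sketch in Haskell. The names are invented and this is far cruder than SELinq's actual type system (no inference, no lattice), but it shows the basic move:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Security labels, promoted to the type level.
data Label = Public | Private

-- A value tagged with a phantom security label.
newtype Labeled (l :: Label) a = Labeled a

-- Only public values can be extracted for output; applying reveal
-- to a Labeled 'Private value is a type error.
reveal :: Labeled 'Public a -> a
reveal (Labeled x) = x

region :: Labeled 'Public String
region = Labeled "Gothenburg"

streetAddress :: Labeled 'Private String
streetAddress = Labeled "1 Example St"
```

Here `reveal region` typechecks while `reveal streetAddress` doesn't, which is the same shape as the address example from the talk.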

After lunch, Paul Stansifer talked about binding-safe programming. The take-away line from Paul's talk was "Roses are olfactorily equivalent under α-conversion." (This might make more sense if you keep in mind that the name of the system he implemented is "Romeo".) Everybody knows that naming is one of the hardest problems in computer science; Paul presented a type system for naming. This pleases me, since I think nothing makes sense unless it's formulated as a type system. In his system, types track binding information: specifically, in a way that allows names to be shadowed safely. The type keeps track of which "direction" shadowing should go in. The running example was expanding let* (because this is in a Lisp-ish context) into lambdas.
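Here's a toy version of that running example: expanding let* into nested lambdas over a small expression type. This is my sketch of the transformation itself; Romeo's actual contribution is tracking the binding information in types so that expansions like this can't capture names by accident:

```haskell
-- A tiny Lisp-ish expression language.
data Expr
  = Var String
  | Lit Int
  | App Expr Expr
  | Lam String Expr
  | LetStar [(String, Expr)] Expr
  deriving (Eq, Show)

-- (let* ((x e) ...) body) becomes ((lambda (x) ...) e), one binding
-- at a time, so each binding is in scope for the ones after it.
expand :: Expr -> Expr
expand (LetStar [] body)            = expand body
expand (LetStar ((x, e) : bs) body) =
  App (Lam x (expand (LetStar bs body))) (expand e)
expand (App f a)                    = App (expand f) (expand a)
expand (Lam x b)                    = Lam x (expand b)
expand e                            = e
```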

The next talk I took notes on was by Lars Bergstrom, on higher-order inlining. (Paul and Lars are both former colleagues of mine from Mozilla, by the way, which is why I'm referring to them by their first names.) Lars presented an analysis that determines where it's safe to inline higher-order functions. This was interesting to me since I'd always thought of inlining higher-order functions as a very naughty thing to do; the first research papers I ever read, in undergrad, were about inlining, and I did my undergrad and master's thesis work on deforestation (which depends intimately on it), so this is a belief I've held for a long time! Lars showed that it's wrong, so that was cool. The central contribution of his work is an analysis that doesn't have to compare the entire lexical environment for equality. I would have to read the paper to explain why, but Lars said that his work provides principles that replace the bag of heuristics that was the previous state of the art, and for now, I believe him.

It's safe to inline a higher-order function if you know that its environment doesn't change dynamically: that is, that inlining won't change the meaning of the function's free variables. Lars' analysis answers the question "does any control flow path pass through a sub-graph of the control flow graph that constitutes a re-binding?" An interesting thing that Lars didn't say (but perhaps it's not as deep as I think it is) is that this touches on how closures and value capture are sort of a hidden side effect even in a pure, strict functional program. His conclusion was "You can use closures in benchmarks now!"

In the Q&A, I asked whether it was imaginable to prove that the analysis improves performance, or at least doesn't make it worse -- in other words, that the higher-order inlining optimization has a predictable effect on performance. This question has haunted me ever since I worked on type-based deforestation. Lars thought this would be very ambitious because inlining is often an enabler of other optimizations, rather than improving performance on its own; though it does inherently eliminate stack frame allocation and deallocation. He said that he and colleagues want to do a correctness (i.e. semantic preservation) proof, but haven't yet.

My last set of notes for the day was on Jennifer Hackett's talk "Worker/Wrapper Makes It Faster". As Lars said in answer to my question, Jennifer's work answers the question I asked, sort of: it's about proving that optimization really does improve performance. She has a proof strategy based on defining a "precongruence": a relation that's preserved by substitution; this is because the "natural" approach to proving performance improvement, induction based on execution steps, doesn't work for lazy languages. If a relation is a precongruence, intuitively, a local improvement always translates into a global one. I would have to read the paper before I said any more, but I thought this talk was cool because it contributes the first performance guarantee for an optimization for a lazy language.
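For readers who haven't seen the worker/wrapper transformation, here's the textbook instance on list reversal (my example, not one from the paper). The point of Jennifer's work, as I understood it, is proving that transformations like this one really are improvements:

```haskell
-- Naive reverse re-walks the accumulated result at every step: O(n^2).
rev :: [a] -> [a]
rev []       = []
rev (x : xs) = rev xs ++ [x]

-- Worker/wrapper: the worker threads an accumulator, and the wrapper
-- recovers the original interface. Same results, linear time.
revWorker :: [a] -> [a] -> [a]
revWorker []       acc = acc
revWorker (x : xs) acc = revWorker xs (x : acc)

rev' :: [a] -> [a]
rev' xs = revWorker xs []
```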

Stay tuned for my notes on day 2, once I get around to editing them!

Make Functional Programming Better by Supporting the Ada Initiative and Petitioning the ACM

Edited to add: we reached our initial $4096 goal in just 5 hours! Can you help us raise it to $8192 and double what we hoped to raise? Edited again to add: We've now exceeded our goal of $8192, six hours before the end of the challenge! Can you help us bring it up to $10,000?

Donate to the Ada Initiative

Clément Delafargue, Adam Foltzer, Eric Merritt, Chung-chieh (Ken) Shan, and I are orchestrating a community challenge to both raise money for the Ada Initiative, and make computer science conferences (specifically, the many technical conferences that the ACM (Association for Computing Machinery) organizes) better. We are challenging anybody who identifies as a member of the functional programming community to do two things:

  1. Donate to the Ada Initiative, a nonprofit organization that is working hard to make it broadly possible for women and people in a variety of other marginalized groups to work in technology.
  2. Call on the ACM to consistently publicize their own anti-harassment policy for all its conferences. That is, we're asking that you -- at least those of you who use Twitter -- tweet a statement like the following one (use your own words; just include the #lambda4ada hashtag and try to include the donation link):

    I donated to @adainitiative b/c I want @TheOfficialACM events to announce their anti-harassment policy. https://supportada.org?campaign=lambda #lambda4ada

Our goal is to raise $4096 $8192 $10,000 for the Ada Initiative by 5:00 PM Pacific time on Friday, September 19. If you use the URL https://supportada.org?campaign=lambda, your donation will count towards the functional programming community challenge and help us reach the $4096 $8192 $10,000 goal. I have personally matched the first $1024 of funds raised -- that is to say, I already donate $80 per month to TAI, so over a year, my contributions will add up to $960. On Tuesday, Sept. 16, I donated an additional $64 to round the amount up to $1024. I've spent the past couple years struggling to pay off student loans and medical bills despite being generously compensated for my work -- nevertheless, I support TAI every month because I see it as an investment in my continued ability to work. I hope that my example inspires those who have a bit more financial freedom than I do to donate accordingly.

If you are reading this and you have benefited from your involvement, past or present, with any part of the functional programming community, we need your support. It is up to you how much to give, but we ask that you consider how much you have gained -- materially, intellectually, socially, perhaps even spiritually -- from what you have learned from functional programming and from the people who love it. Particularly if you are currently making your livelihood in a way that would be impossible without the work of the many people who have made and are making functional programming languages great, consider giving an amount that is commensurate with the gift you have received from the community. If you need a suggested amount and are employed full-time in industry, doing functional programming or work that wouldn't be possible if not for the foundations laid by the FP community, $128 seems pretty reasonable to me -- and at that rate, we would need only 32 people to donate in order to reach the goal. I think there are far more people than that who do FP for a living!

If anybody assumed that Clément, Adam, Eric, or Ken endorsed anything in the remainder of this blog post, that assumption would likely be wrong. In what follows, I am speaking only as myself and for myself. I am an employee of Heroku, a Salesforce company, but neither Heroku nor Salesforce endorses any of the following content either. Likewise, I don't necessarily agree with everything that Ken, Eric, Adam, or Clément might say in support of this challenge; we are all individuals who may disagree with each other about many things, but agree on our common goals of supporting the Ada Initiative and raising awareness about the ACM anti-harassment policy.

If you've already gone ahead and made a donation as well as tweeting your support under #lambda4ada, great! You can stop reading here. If you're not sure yet, though, please read further.

Why ICFP Is Fun... For Some

  • Young man, there's no need to feel down
  • I said young man, put that old journal down
  • And come publish at... I - C - F - P
  • It's fun to publish at... I - C - F - P
  •  
  • When your lambda is tight, and your theorems all right
  • You can come, on, down, and publish at... I - C - F - P
  • You know I'm talking 'bout... I - C - F - P
  •  
  • There's a place you can go, and lots of friends that you know, at the I, C, F, P.

          -- Nathan Whitehead, paying homage to The Village People

ICFP, functional programmers' annual "family reunion" (to borrow a phrase from one of the organizers of this year's ICFP, which took place two weeks ago) feels to me like more than just an academic conference. The lone academic publication that I can claim (second) authorship for appeared in ICFP, but it's more than just the opportunity to hear about new results or share my own that keeps me coming back. Maybe that has something to do with the affiliated annual programming contest, or the copious co-located workshops revolving around different language communities, or maybe it's just about folks who know how to keep the "fun" in functional programming. It's a serious academic conference that occasionally features cosplay and [PDF link] once had an accepted paper that was written in the form of a theatrical play.

Putting The Fun Back in Functional Programming and How the Ada Initiative Is Helping Us Do It

tim: A warning sign with "Danger" in white, superimposed over a red oval on a black rectangle, above text  "MEN EXPLAINING" (mansplaining)
Or: Lessons I Learned From Years Of Flouting Them
Or: Don't Do What I Did

The following is a list of tips derived from what I think has helped me enjoy computer science conferences more (and possibly learn more from them) as time has passed. I don't assume that they will be helpful to anybody else, but perhaps they're worth thinking about! I expect this list will be the most useful for grad students starting out, and other people who haven't gone to conferences before. If you are more experienced, you can always tell me why I'm wrong or whatever.

1. Pace yourself.

Skip talks. No, really. Going to every talk should not be your goal. Most people can't go to every talk and understand everything. (Don't even expect to understand everything in any one talk.)

Try to highlight talks you especially want to go to in the program in advance. You can do this during the first coffee break or during the first talk when you get there halfway through; it's okay :) Be open to adding or dropping talks (adding can be if someone tells you "hey, X's talk is going to be good" or if you happen to see the beginning and are drawn in; dropping can be if you feel tired, want to get some exercise, or get into a good conversation with somebody). I promise you that even if your school or employer is paying, nobody is going to exhaustively quiz you on the contents of every talk when you get home.

2. Pace around.

If it's possible -- not too rude and disruptive given the room layout, and physically possible for you -- try pacing around while you're listening to talks. At ICFP this year, the room had a big space in the back without chairs, which some people used for standing, lying down, doing yoga, and other such things. I don't know if this was intentional, but it worked well. Sitting in the same position all day is not good for most bodies. Don't be afraid to move, stretch, or even sit on the floor or lie down while listening to talks. If you're me (and possibly even if you're not me), this will help you listen better. Just because most people are sitting (too close together, on chairs that are probably uncomfortable) doesn't mean you have to.

Another advantage of standing is that it discourages you from opening your laptop, if that's a compulsion for you.

3. Take notes.

Not everybody focuses better while taking notes, but I certainly do; if my hand isn't moving, my mind checks out. But taking notes does more harm than good unless you do it effectively. It took me years to learn that note-taking isn't about writing down what the speaker says in complete sentences. If you hear something that makes you think, "That's interesting! I wonder...", write it down. If you hear something you want to read more about, write it down. Notes can be illegible to anyone else (so long as you can read them later!), in incomplete sentences, structured as bullet lists, etc. Nobody else gets to see your notes unless you let them.

Sometimes notes are write-only, and that's totally okay. You might never look at them again, but the act of writing will still have helped you remember what you learned.

4. If you don't understand, assume that it's not your fault.

This doesn't mean getting aggro at the speaker because they were unclear. It does mean not bearing all of the blame for every single talk you don't follow. It also means asking questions (sometimes) without thinking it will expose your horrible ignorance. Chances are, if you have a question in mind, ten other people have it too and don't want to ask. If you ask, you'll be helping all of them.

It's possible that it is your fault, but more often, somebody just didn't put in the time/didn't do practice talks/other things to improve talk quality. At least at the conferences I go to, papers are selected solely based on the quality of the ideas and writing, not the talk (since when the authors submit the paper, they haven't prepared the talk yet!) Someone can write a great paper with great ideas, but still have no idea how to organize slides visually or structure a talk. The academic system affords very few incentives to learn how to do that, other than an individual's intrinsic motivation and/or peer pressure.

5. If you can't pay attention to the content, critique style -- INSIDE YOUR HEAD.

I mean, it's educational for you to think about what methods do and don't work for slides ("wow, that hot pink background with white text is hard to read..." "wow, I don't like Comic Sans and only SPJ gets a pass"), but just to be clear, nobody else (especially not the speaker) wants to hear your bikeshedding. That said, I find this is a way for me to actually get more out of the talk content, because if I'm noticing how I could have done the talk better from a purely visual POV, I'm not thinking about how much of a doof I am for not understanding the content.

6. If possible, stay physically nearby.

At least at the conferences I go to, the conference is usually at a hotel, and you can also stay at the hotel, though the hotel the conference is in is usually outrageously expensive (not an issue if your research grant or company gives you an unlimited budget, but for grad students, faculty at small schools, and unaffiliated people, that can be a problem). That means you can theoretically travel to an exciting, cosmopolitan city for a conference, and never leave the hotel except to go back and forth to the airport (if you're willing to eat hotel restaurant food). The drawback is that there are usually much cheaper options, but generally a significant distance from the conference. It's up to you to set your own budget priorities, but even though I wish they weren't so exorbitantly priced, there really are advantages to staying in the same hotel as the conference. This is true even in European cities where you can walk or take an easy light rail ride everywhere -- the time it takes will add up, and you're spending enough time attending, going to dinner, and staying out late shooting the breeze that every minute counts (and sleep is crucial to everything else working out).

Staying at the hotel also makes it easier to show up on time for the first talk in the morning, which saves you guilt about missing it (especially if the conference puts invited talks first, which is cruel and unusual punishment if you ask me -- signalling that a talk is expected to be especially good by scheduling it at a time when it's difficult for many of us to be awake). It also makes it a snap to go back to your room for a nap, break, or just some alone time when you need it.

7. Know your limits.

I don't mean alcohol so much as people and new information. It's okay to tell yourself that your brain is full and go take a break. (Taking notes makes this easier, since you know you'll be able to resume easily.) This is true whether you're an introvert, extrovert, or the (probably-majority of us) who don't fit neatly into one of those categories. The limits may vary wildly for different people, but almost everyone has them. When you hit your limit, you'll know.

8. Ask questions.

Many conferences have a few people who seem to dominate the Q&A sessions for almost every talk. Session chairs usually know this, and some will try to call on less familiar faces. But for that to work, people have to step up. So every question you ask -- as an outsider, newcomer, or whatever -- means one more fresh perspective that the whole conference gets to hear.

Often, not everybody gets to talk in a given Q&A session, but it's okay and encouraged to approach a speaker later and say you liked their talk and are wondering about ____. This is also totally okay if you're just too intimidated to ask a question in front of a large group. Personally, when I've given talks and no one has said a word to me about it later -- or if all anyone says is the equivalent of "great talk!" -- I worry.

9. Know how talks get selected.

At least at the academic conferences I go to, program committees don't select talks based on presentation quality, because they don't get to see the talks first or figure out how good a speaker the presenter is (in fact, often they don't know who will speak, because papers usually have multiple authors and only one will give the talk.) They select talks based on their assessment of the quality of the papers that go with them. Selection also isn't an objective process; it's a political, messy, human one (just ask anyone who's been on a PC). Inclusion in a given conference, even a conference with a good reputation, doesn't imply lasting value. Rejection doesn't imply absence of value.

I'm saying this to encourage you to go easy on yourself if you miss talks or don't get much out of one or many talks. It doesn't necessarily mean that you had a great opportunity to learn something, and you (and only you) squandered it. When choosing talks to go to -- or choosing how hard to listen! -- trust your own judgment and don't assume everything is a pearl of wisdom.

10. Know that sometimes a great idea is buried in a bad talk.

Even if a talk leaves you reeling and not in a good way, maybe it just means you should read the paper. Different people learn differently, but for many of us, it's easier to understand something when we can go back and read the same sentence six times before continuing. You can achieve something similar by re-watching the video (if you're at a conference that records talks) later, which also has the advantage that you can rewind parts you want to listen to again and fast-forward through parts you don't. All of this only applies if the idea actually interests you. There's no obligation. In my experience, the most common scenario is a terrible talk based on an alternately lucid and confusing paper about a cool idea.

Saying No

May. 1st, 2014 10:44 am
tim: A warning sign with "Danger" in white, superimposed over a red oval on a black rectangle, above text  "MEN EXPLAINING" (mansplaining)
In the past week, I had two talk proposals accepted: one for LambdaJam in July, and one for Open Source Bridge in June. I ended up declining to give both of them. This was hard for me.

I like giving talks. I don't have any stage fright. I've been told I give good talks. In my field, good speakers aren't very common, but the few good talks I've seen make me want to go do the same thing, in a way that almost nothing else does. I like the performance aspect of it, and it makes being at a conference make sense (I always feel vaguely awkward when someone asks me if I have a talk and I say no.)

What I don't like is preparing talks. I don't see a way around this. It's not like anyone else can do it for me. I think it's because of how feedback works -- I get feedback at the very end, after I give a talk, but it's very hard to get any feedback on intermediate products, and when something isn't closely coupled with my job, I don't really have an audience for a practice talk. Even if I do a practice talk, that's after I've prepared all the slides. I think to make the process less painful, I'd have to have a way to get feedback a lot earlier.

I proposed something pretty ambitious for OS Bridge: a hands-on Haskell tutorial. I would have to prepare the tutorial materials -- code with "fill in the blank" pieces -- from scratch. Likewise, for LambdaJam, I proposed a talk on a project I've been wanting to do (a "traveling salesman" approximation implementation in Haskell -- for fun, applied perhaps to a data set like the list of Hosteling International hostels in the US), with the idea being that the talk would give me motivation to actually implement it. But now that I would actually have to write all that code in less than 2 months, it doesn't look as appealing to me.
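For the curious, I wasn't planning anything fancier than a greedy nearest-neighbor approximation to start with. A minimal sketch (city names and Euclidean coordinates here are made up for illustration; real hostel data would need geographic distances) might look like:

```haskell
import Data.List (minimumBy, delete)
import Data.Ord (comparing)

-- A "city" is a name plus (x, y) coordinates.
type City = (String, Double, Double)

dist :: City -> City -> Double
dist (_, x1, y1) (_, x2, y2) = sqrt ((x1 - x2) ^ 2 + (y1 - y2) ^ 2)

-- Greedy nearest-neighbor tour: start at the first city and always
-- hop to the closest unvisited one. Not optimal, but a classic
-- first cut at approximating TSP.
tour :: [City] -> [City]
tour []     = []
tour (c:cs) = go c cs
  where
    go cur []   = [cur]
    go cur rest = cur : go next (delete next rest)
      where next = minimumBy (comparing (dist cur)) rest
```

The appeal of the talk idea was exactly that the interesting part starts after this: better heuristics, and wiring it up to real data.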

I think what I need in my life now are things to do in my free time that I can do with other people and that don't feel like work. Unfortunately, preparing talks doesn't meet either criterion: I have to do it alone, and it feels like work. And I can't do it on the clock, since it's related to my job but the talks aren't about what I actually do at work (since not all of it is open-source).

In the past, giving talks has seemed like a way for me to get bonus points at work, but the last talk I gave -- at Open Source Bridge a year ago -- backfired in that sense. My manager (at my previous job) complained that I "gave too many talks" (because I gave one talk in two years) because I spent the two weeks before the talk preparing slides and not doing much else. That experience discouraged me from giving more talks in the future. Since what the talks would be on would be only loosely related to my job, I don't necessarily expect negative feedback for giving them (since all the prep would be in my copious free time), but I don't expect it to be a big positive, either.

So the calculation I did was that preparing the talks was likely to give me more anxiety than satisfaction. And in fact, that would still be true even if I did only one of the talks. So I declined. I still feel like I'm passing something up, but the cost of accepting the opportunity seems too high for me right now. Of course, that could change in the future, and there will always be more conferences.

I still plan to go to Open Source Bridge -- there are too many good talks not to, I'll be passing through Portland anyway, and it's a great chance to see a lot of friends. I don't know what the future holds for me, career-wise, so right now, putting in extracurricular effort to be more established in the tech community doesn't seem like a good investment: I don't know if I'll be in this community in two years. It's uncomfortable to be in this liminal state, but I think the way to deal with that discomfort is to experiment with actually being nice to myself and giving myself enough time to satisfy needs that don't have to do with writing code.
tim: text: "I'm not offended, I'm defiant" (defiant)
Content warning: discussion of rape, domestic violence, sexual harassment, and victim-blaming in linked-to articles.

I'm about to submit -- no, I just submitted -- a pull request to add my name to the Tech Event Attendance Pledge. Specifically:

  1. I will not attend any tech events where Joe O'Brien (also known as @objo) is in attendance.
  2. I will not attend any tech events without a clear code of conduct and/or anti-harassment policy.

For me, the first item is likely to be a moot point, since I'm not a Rubyist (although I play one on TV... er, podcasts). Even so, I think it's important for me to explicitly say that a space that's unsafe for women is a space that's unsafe for me. And a space that accepts harassers, abusers, or rapists who have not been held accountable or shown remorse for their actions -- whether we're talking about Joe O'Brien, Michael Schwern, or Thomas Dubuisson, just to pick a few out of many examples -- is an unsafe space.

The second item is more likely to affect my day-to-day activities, but fortunately, the two conferences I'm most likely to attend in the future already have anti-harassment policies. Open Source Bridge's code of conduct is a model for all other events of its kind. And ICFP (along with all other SIGPLAN conferences) has an anti-harassment policy. At this point, there's no reason for any conference organizers to not have already done the work of establishing an anti-harassment policy (and it's not much work, since the Citizen Code of Conduct is available and Creative-Commons-licensed to permit derivative works; it's the basis for Open Source Bridge's code of conduct), so there's no reason for me to speak at or attend a conference that doesn't have one.

Profile

tim: Tim with short hair, smiling, wearing a black jacket over a white T-shirt (Default)
Tim Chevalier
