[personal profile] tim
On Twitter, [twitter.com profile] mcclure111 asked -- and I'm wildly paraphrasing here, since apparently it's impossible to search for a tweet that's more than a week old:

"Research languages seem to focus on only one question. Rust focuses on ownership types. I'm not sure if they're useful or not. They 'deform the code' and I don't know whether it's worth it."

It amuses me to see people looking at Rust and concluding that it focuses on ownership types, because when I started working on Rust, less than two years ago, it didn't have ownership types at all. When I started on the Rust team as an intern, I thought that Rust was centrally about investigating the usefulness of typestate. (In Rust, typestate was a system of assertions that were visible to the typechecker but relied on dynamic checks for soundness. We removed it long ago.)

But later, the team decided to get rid of typestate, and add ownership types. I think the decision to add ownership types happened during the three months in between the end of my internship and the start of my full-time employment at Mozilla, and I didn't pay a lot of attention while I was on leave. But, my understanding is that the decision got made not because anybody said "I really want to do research on ownership types", but because all of the other options had been exhausted. Way, way back in the day, Rust had neither automatic GC nor ownership types. It only had automatic reference counting (that the compiler managed for you), which necessitated a cycle collector that never quite got finished. But ref-counting everything just added too much overhead. Everyone figured that GCing everything would also add too much overhead. So that left what's known in the programming languages literature as "regions", though we've come to call them "lifetimes" instead.

Also, calling Rust a research language is funny to me because -- as its name reflects -- we've tried hard to avoid incorporating new technology into it. We haven't always succeeded at failing to be novel, but we have a rule of thumb of not including any ideas in the language that are new as of the past ten years of programming language research. The field of programming languages is full of old technology that hasn't been put to use in solving problems that it's exactly suited for. The goals with Rust were to avoid reinventing wheels, and to see what the past had to teach us. I can't blame anyone for thinking Rust is a research language, though, since it is being developed by Mozilla Research.

But anyway, back to ownership types. Sometimes, people new to Rust say that having three different sorts of pointers is complicated, and do we really need to have that? There was a recent thread on the rust-dev mailing list about whether Rust's syntax is "intimidating" for newcomers, and a large part of that discussion was about the syntax for ownership types. Part of the source of the original confusion had to do with explicit type declarations that weren't really necessary, that were only present to work around a bug in the compiler (that got fixed later). It's hard to say much more without an example, so here is one.

First of all, let's do something really simple that only uses @-pointers. You can think of @-pointers as being garbage-collected, though right now the compiler still inserts reference counting code to reclaim them when no longer needed. The compiler knows where in the program an @-pointer is guaranteed to be initialized (for simplicity, let's just say that you must initialize any variable x of type @T for any type T as soon as you declare x, though that's not actually true in Rust), but doesn't know anything about where it's guaranteed to be no longer needed (and thus safe to deallocate).

struct IntList {
    head: @int,
    tail: Option<@IntList>
}

// And no, I'm not going to call this `cadr`
fn head_of_tail(l: @IntList) -> @int {
    match l.tail {
        Some(tl) => tl.head,
        None     => l.head
    }
}

fn main() {
    let two = IntList { head: @5, tail: Some(@IntList{ head: @7, tail: None}) };
    let one = IntList { head: @2, tail: None };
    assert *head_of_tail(@two) == 7;
    assert *head_of_tail(@one) == 2;
}

This code defines a simple non-empty linked list data type (for simplicity I'm requiring the elements to be integers, since explaining Rust's syntax for parametric polymorphism would be distracting) and a non-failing (total) function that returns the head of the list's tail, which is to say the second element in the list if its length is greater than 1. What if its length is 1? Then head_of_tail returns the list's head. This is a pretty silly example, but it's enough to show the point.

Supposing that we're using a tracing garbage collector, the GC will traverse all the pointers in the program (starting from two and one, since they're variables that live on the stack, and thus are roots), and deallocate any memory that's no longer needed. So if the GC happened to run between the first and the second assert statement, it would free up two, including the @-pointers in its head and tail fields. (With automatic reference-counting, actually, garbage can often be collected exactly as soon as it's no longer needed... but I already said "assume we're using GC".)

This is a tiny program, but imagine one that allocates lots and lots of IntLists. Maybe you don't want the garbage collector to have to collect IntList nodes -- GC generally involves some time and space overhead. So let's try something different.

struct IntList {
    head: ~int,
    tail: Option<~IntList>
}

fn head_of_tail(l: &IntList) -> ~int {
    match l.tail {
        Some(tl) => copy tl.head,
        None     => copy l.head
    }
}

fn main() {
    let two = IntList { head: ~5, tail: Some(~IntList{ head: ~7, tail: None}) };
    let one = IntList { head: ~2, tail: None };
    assert *head_of_tail(~one) == 2;
    assert *head_of_tail(~two) == 7;
}

This code is similar, but I've changed @ to ~ everywhere, except in the signature of head_of_tail. A ~ pointer in Rust is owned: there's a known location in the code where it becomes allocated, and a known location where it becomes deallocated. In this code, the compiler could safely conclude that the end of the block is that location, for both one and two. Or it could be more precise, and free one immediately after the assert statement that uses it. In a large program, this more precise knowledge about pointer lifetimes could potentially keep the memory footprint low.
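To make "owned" concrete, here's a minimal sketch in the same syntax (consume is a made-up name): once an owned pointer is passed by value to another function, ownership of the box moves with it, and the caller can't use the moved variable again.

fn consume(x: ~int) {
    // the box x owns is freed here, when consume's scope ends
}

fn main() {
    let x = ~5;
    consume(x);   // ownership of the box moves into consume
    // dereferencing x here would be a compile-time error: x was moved
}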

Also, you'll notice that the head_of_tail function takes an &IntList: finally we meet the third kind of pointer in Rust. head_of_tail now takes a borrowed pointer: this means that head_of_tail can't assume anything about whether the contents of l live in the shared heap (where @-pointers point), the exchange heap (where ~-pointers point), or even the stack. The other side of the bargain is that the code the compiler generates for head_of_tail doesn't have to take care of cleaning up l when it's done with it, because l's type guarantees that head_of_tail's caller (or its caller's caller...) will do that.
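To see what that flexibility buys you, here's a sketch of a single function over &int whose callers can hand it an @-box, a ~-box, or a plain stack value. (I'm relying on the compiler borrowing the boxed arguments automatically at the call sites; writing &*a and &*b explicitly would express the same thing.)

fn deref(p: &int) -> int { *p }

fn main() {
    let a = @5;
    let b = ~7;
    let c = 9;
    assert deref(a) == 5;    // borrowed from the shared heap
    assert deref(b) == 7;    // borrowed from the exchange heap
    assert deref(&c) == 9;   // borrowed from the stack
}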

Moreover, head_of_tail returns a copy, because it wouldn't be safe to make a pointer to an owned box that the owner doesn't know is there. I'll show in the next example how to avoid this copying.

So far, the two versions of head_of_tail differ in how much information they give you about the result. The first version says "I'm going to give you a pointer to a box that might have any number of pointers into it, any of which could potentially be the last remaining pointer (and thus the one that would cause the box to get reclaimed when its lifetime ends); also, these pointers might exist in different tasks [Note: I wrote that part while sleep-deprived; it's not true.]". The second version says "I'm going to give you a pointer to a box that might also have multiple pointers into it, but all the pointers will be in the same task and there will be a single, distinguished pointer that 'owns' the box, so when the owner dies, the box will also die." The same property might have been true in the first program, every time it runs, but the types don't tell you that.

Here's a final version that's as precise as I can make it, but doesn't copy any boxes either:

struct IntList {
    head: &int,
    tail: Option<&IntList>
}

fn head_of_tail(l: &L/IntList) -> &L/int {
    match l.tail {
        Some(tl) => tl.head,
        None     => l.head
    }
}

fn main() {
    let one = IntList { head: &2, tail: None };
    let middle = IntList { head: &7, tail: Some(&one) };
    let two = IntList { head: &5, tail: Some(&middle) };
    assert *head_of_tail(&one) == 2;
    assert *head_of_tail(&two) == 7;
}

Here's where we start getting into the kind of syntax that Rust newcomers often find intimidating. Let's look at head_of_tail first. It still takes a borrowed pointer to an IntList, like last time, but its return type is different. Also, we're writing &L/IntList instead of just &IntList like before. The L is a lifetime variable. If it helps, you can think of head_of_tail's signature as a universal claim: "For all lifetimes L, if you exhibit a borrowed pointer to an IntList with lifetime L, I can give you back a borrowed pointer to an int whose lifetime is also L." The code for head_of_tail shows this claim is true.

In the previous example, the type &IntList was actually short for &L/IntList, where L is an arbitrary name. Rust lets you omit the name of a lifetime if it's not used anywhere.
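Concretely, a function whose result doesn't borrow from its argument can leave the lifetime anonymous, while one that hands back a pointer into its argument has to name it so the two occurrences can be linked. (head_value and head_ptr are hypothetical names, using the &int version of IntList from above.)

// no lifetime name needed: the result doesn't borrow from l
fn head_value(l: &IntList) -> int { *l.head }

// the name L links the argument's lifetime to the result's
fn head_ptr(l: &L/IntList) -> &L/int { l.head }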

Looking now at main, notice that all intermediate list nodes have to be named -- that's because borrowed pointers should generally point to values with names (otherwise, the pass in the compiler that checks for consistent lifetime usage will probably complain that you're borrowing something whose lifetime is too short). In any case, we've successfully transformed the first program to a version with no garbage collection and no copying of boxes. I hope it's not too hard to imagine how this approach could scale up to a bigger program.
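Here's the kind of mistake that complaint is protecting you from, sketched as a hypothetical function that tries to return a borrowed pointer into a box it owns locally. The box dies when bad returns, so the compiler rejects the borrow rather than let the pointer dangle:

fn bad() -> &int {
    let x = ~5;
    &*x    // rejected: the borrow would outlive x, whose box is freed here
}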

Now that I've demystified the syntax a bit, we can ask once more: is it worth it? Well, as with almost everything in engineering, the answer isn't "yes" or "no"; it just depends where you stand on a couple of trade-offs.

Syntax vs. semantics

It's been a bit jarring for me to see how many of the discussions I've seen, or been involved in, about Rust center around syntax. As a grad student doing programming language research, I was taught to focus on semantics, not syntax, and when I read code, the parser in my head generates an abstract syntax tree without me thinking about it too much. So the question of whether a particular syntax is easy or hard doesn't mean much to me. Everything is hard before you learn it, and everything is easy once you've learned it well enough. What interests me, personally, about Rust is that you can say more about what your code means than you can in C or C++. Learning Rust certainly wasn't roadblock-free for me -- and I've had to learn it over and over again, since it's changed so much since I started -- but I'll be honest and say that once we pass the point of "good enough", I find syntax conversations to be really boring. So I'm probably not the best person to ask about syntax. Fortunately, we do have people on the team who care and have tried pretty hard to improve some of Rust's syntax from what it used to be -- I think their efforts have paid off.

Time now versus time later

It's all very well for me to say that everything is easy once you learn it, but that doesn't answer the question of whether it's worth it to learn a new language instead of doing all the other things you could be doing with your time. So here's why I think it is. I think one reason why some people might think it's easier to just keep writing code in C or C++ than to learn Rust is that it probably takes less time, coding in C, to put something together that "sort of works". Since the typechecker doesn't do as much for you in C as it does in Rust, it's easier to get code to typecheck. But that doesn't mean it's correct or secure -- the price you pay for using a language with lenient static checking is that you have to achieve confidence in your code's correctness in other ways. That's what "time later" means: testing, debugging, and in the worst case, having your code fail out in the field and getting bug reports from users. In contrast, Rust's typechecker lets you find more errors up front; it's harder to get your code to compile, but from that point onwards there's just less to do to get it to work. In college, when I had a paper due in two weeks and I said to myself "I'll start it tomorrow", I was falling for the fallacy that my time today was more valuable than my time tomorrow. I thought that doing something more fun today would be more valuable to me than it hypothetically would have been tomorrow. The only way that would be guaranteed to be true, though, is if tomorrow never comes. In my opinion, choosing to use a language with lenient checking because satisfying a more powerful typechecker takes too long is just another form of procrastination.

So is it worth it for you to use Rust and deal with annotating your code in ways you may not be used to doing? It depends on your goals. If you want to write code whose performance matters, I'd say it's worth it. Even if you don't, it's still possible to experiment with Rust just using garbage-collected data structures, as in the first example. Even though I've mainly talked about ownership types here, there's plenty else to explore about Rust.

That said, I think ownership types are pretty neat, and I didn't come to that position immediately. I spent years programming in Haskell, and I believed that you should write your code as declaratively as possible so that the compiler can take care of all the operational details. I no longer believe in such universal rules. (Not to attribute any such beliefs to other Haskellers, mind you. I was young, and came to functional programming looking for something to believe in.) Even aside from performance concerns, there's a certain elegance in being rigorous about the lifetimes of your program's data. Writing code with borrowed pointers makes me think in a way I've never thought before, and that's cool. (Part of the inspiration for Rust's type system comes from the research area of linear and affine types, if you want to dig deeper into what kinds of thinking these sorts of type systems enable.) Constraints provoke creativity, so that's another reason, besides performance, to be interested.

My own view of language design is that programmers like it when language designers give them the option of making precise and explicit the formalisms that previously existed only in the insides of programmers' heads. That option is an especially helpful one when combined with inference and reasonable defaults, like we try to do in Rust. It's not just instrumentally helpful towards the goal of a certain software artifact: increased precision and explicitness also help you debug your own thought processes and make your own thinking more rigorous. And isn't that the entire reason why we write code in the first place? (Aside from fame and money, of course.)

I think this is the longest post I've ever written under the research tag on my blog! Comments, criticisms, corrections welcome; you can comment without creating a Dreamwidth account, using OpenID. For further reading, take a look at the borrowed pointer tutorial that's part of the official set of Rust tutorials, and many of Niko Matsakis's blog posts.

(no subject)

Date: 2013-02-03 10:08 pm (UTC)
From: [personal profile] hitchhiker
very good post. shows the motivation behind explicit lifetime parameters nicely, too.
From: [personal profile] luinied
I'm far from a C++ expert, but, from what I can tell, these are things that good C++ programmers already think about, and having a clear idea of who owns what pointer is really important in big C++ projects. The language just doesn't help with this the way you'd like it to. And while this is obviously what I'd say given my background, I can only see it as a good thing for a language to help out with the exacting detail work that has to be done if you want things to work right.

Does Rust use regions under the hood at all? Because lifetime variables sound a lot like region variables.
From: [personal profile] graydon2
"Under the hood" there are lifetimes associated with stack frames and sub-scopes of stack frames (both of which only admit fixed-size allocation on scope-entry), plus the two heaps, plus an (internally unsafe, externally safe) arena library that lets you do dynamic allocation pinned to a given lifetime, providing borrowed pointers to its caller.

It does not do the sort of automated "moving heap allocations into their spanning region" thing you might be thinking of from (say) the ML kit. That is a very cool thing for ML kit to do, but it's also a bit too implicit-and-magic for our taste. Or at least my taste! Others on the team have demonstrated a higher tolerance for fancy types than I have. I get confused easily.

Typo?

Date: 2013-02-04 12:56 pm (UTC)
From: [identity profile] ccshan.myopenid.com
"it would free up two" -> "it would free up one"?

Thanks for writing this!

The tweet you're looking for?

Date: 2013-02-04 07:21 pm (UTC)
From: [identity profile] skyborne.myopenid.com
Is this the tweet you were looking for?

https://twitter.com/mcclure111/statuses/294953109493514240

(Agreed that twitter search is too real-time focused, so I put `site:twitter.com mcclure111 rust` through Google.)

what about non-linear data-structures?

Date: 2013-07-28 04:59 am (UTC)
From: [identity profile] unsoliciteddave.blogspot.com
Owned pointers seem like a really neat solution for linked lists, but don't they break down quickly for many non-trivial data-structures? Basically any which are non-linear (require more than one pointer to the same data), and certainly any that are cyclic.

What data-structures fit into the owned-pointer model vs needing general case memory management.

singly-linked-list : ok
dynamic-hashtable : seems possible
sorted-hashtable : ?
Doubly-linked list : ?
kd-tree : ?
octree : ?



Re: what about non-linear data-structures?

Date: 2013-07-30 11:41 am (UTC)
From: [identity profile] unsoliciteddave.blogspot.com
The doubly-linked list implementation does not use borrowed pointers; it uses *raw* pointers for the backlinks.

https://github.com/mozilla/rust/blob/master/src/libextra/dlist.rs

struct Rawlink<T> { priv p: *mut T } // <<--- that is a RAW pointer...

struct Node<T> {
    priv next: Link<T>,
    priv prev: Rawlink<Node<T>>,
    priv value: T,
}

If I understand borrowed pointers, they are "safe" in that they are only allowed to exist deeper in the stack context. This means they can't be allowed in heap data-structures (and it means Rust must be apartment-threaded) -- otherwise the borrowing would not be safe.

As far as I can see, this means any cyclic datastructures break the owned/borrow pattern. The structure must be built unsafe with owned/raw, or it must be built with @ pointers.



Tim Chevalier