Core.async a Clojure Library for Asynchronous Programming by David Nolen - Transcript
Note: This is the transcript from the "Core.async a Clojure Library for Asynchronous Programming" presentation by David Nolen from The New York Times. The video of his presentation can be found here: http://g33ktalk.com/core-async-a-clojure-library/
David Nolen: My name is David Nolen. I’m going to talk a bit about ClojureScript and core.async. How many people here have ever read about Hoare’s communicating sequential processes? Cool. That’s good. Has anybody ever tried using Golang, Rob Pike’s Golang? Only one, okay. Has anybody used a language that actually implements CSP? I mean, Go is one. So not that many. So this is something I think is really funny: something that nearly everybody has heard of, but nobody has tried. There have been languages in the past. You had occam-pi for the transputer, you had Concurrent ML, which was a variant of Standard ML that supported CSP. Then Go is actually really making waves. People really like it. I don’t really like it, but I think the CSP aspect of it is actually pretty cool. It very much holds very closely to Tony Hoare’s ideas.
So, Rich Hickey decided more or less to just copy Go’s interpretation of Tony Hoare’s original ideas, so I’m not going to assume that you know too much about CSP, and so we’ll go slow, but we’ll end up going fast later, so it won’t be boring if you think you know this stuff.
Has anybody here tried core.async? Okay, cool, cool. Sweet, so that’s good. If you’ve tried it, then you’ll see some cool stuff. If I’m moving too fast, just raise your hand and ask a question because I’ve looked at this stuff so long, I’m probably just going to assume that you can read my mind, and that’s probably simply not true. So, stop me if something doesn’t make sense.
This was actually explored in Concurrent ML. There was a thing called eXene, and also Rob Pike had done some cool X Window stuff in the ’80s with one of his various CSP-like languages. I mean, Rob Pike has basically been writing the same language for twenty years now, and he finally hit the jackpot with Go. But he was also excited about the possibility of doing UIs with CSP.
So, here I have a go block, and I just want to show the semantics. If you have a go block, it always returns a channel. So, the result of a go block is always a channel. I have a demo page here, I’m going to refresh this, and you can see that I console.log the result from a go block and I get back a channel. So, the result of a go block is always a channel, not five, right? So, it seems a bit strange, I returned five, but what actually is going to happen is that five is inside that channel, the channel has that value, but I haven’t read it out.
Now, I want to be able to read something out of that channel, and so, if I’m not familiar with core.async, I might think that I can do this: I’m going to console.log, go five, and it should return five in a channel, and I’m going to try to read from the channel. The angle bracket bang is like read a value off a channel if you have a channel. And I’m going to get a nice little error here that says that you can’t do that. You can’t do a read outside of go blocks. If you want to have this illusion of being able to do things asynchronously without writing callbacks, you have to do all of your operations inside the go block. This may sound limiting, but you’ll see that you can actually do quite a bit. In Clojure, just to explain, in Clojure, you can do this with the angle bracket double bang. So you’re allowed to do reads off of channels outside of a go block, but it means that you could potentially tie up a thread. So you can do angle bracket double bang, which is a take, or greater-than double bang, which is a put. You can put or take from a channel.
So, now that I know this, I can be, well, that’s okay, I’m just going to wrap that expression in another go block. And then I can do a read off that channel, and then we see console.log five, which is what we want, right? So we’re allowed to do puts and takes on channels with the put and take operator, as long as we’re inside of a go block.
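Since core.async deliberately copies Go’s channel semantics, the same idea can be sketched in Go. Go’s `go` statement doesn’t itself return a channel, so this sketch makes the result channel explicit; the name `goBlock` is illustrative, not from the talk.

```go
package main

import "fmt"

// goBlock mimics a core.async go block that returns 5: the result
// isn't the value itself but a channel that will carry the value.
func goBlock() <-chan int {
	ch := make(chan int, 1)
	go func() { ch <- 5 }() // the "body" runs asynchronously
	return ch               // you get the channel back immediately
}

func main() {
	ch := goBlock()
	fmt.Println(<-ch) // take the value out of the channel: prints 5
}
```

As in the talk, the value five lives inside the channel until somebody takes it out.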
So, the primitives are really go block and channels, and we have put and take operations. So here I make a channel and then I have a go block and I say, we got here, I print that, I read off the channel, and then I write this other console.log statement, and it says we’ll never get here. And this seems a bit strange, and I’ll explain.
So, the semantics of CSP is that writes and reads, when you put and you take, if you put something, somebody has to take from the other side. Otherwise, inside that go block it becomes a suspended operation. If I take, and there’s nothing there on the channel, then again the operation is going to suspend until somebody puts something. So this is really important, and it’s very much how Go works and how the original CSP model was defined. Synchronous operations, right?
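This rendezvous behavior can be seen directly in Go, whose unbuffered channels have exactly these semantics (a sketch, not code from the talk):

```go
package main

import "fmt"

func main() {
	c := make(chan string) // unbuffered: every put must meet a take
	done := make(chan bool)
	go func() {
		fmt.Println("we got here")
		fmt.Println(<-c) // suspends here until somebody puts
		done <- true
	}()
	c <- "hello" // this put rendezvouses with the take above
	<-done       // wait for the other process before exiting
}
```

If nobody ever did the put, the take would stay suspended forever, which is exactly the "we'll never get here" case described above.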
So, in order for this to work, all the magic in core.async is about, in ClojureScript, making it appear that we have lightweight threads so that we’re not actually going to be eating up threads while we’re waiting for something to happen. Is anybody confused about what I just said?
Audience: What does angle bracket bang mean?
David: Take a value off the channel.
Audience: What value is …?
David: So, the channel, it’s not a place, it’s a conduit. I can put stuff onto it, sort of into it, so that people can take things out or I can take things out, right? The only thing that’s important is that, if I try to take something out, and there’s nothing there, I’m going to be suspended until somebody puts something there.
Audience: What kinds of things could you put there?
David: Good question, you could put anything you want, though I recommend putting immutable values.
Audience: And the people know what kind to type?
David: It’s purely dynamic. So it’s not like in Go. Google’s Go, you have to give the channel a type. So these are untyped channels we could put anything we want, though you really should put immutable values, so you can avoid the problem you have in Go with race conditions. You can have data races if you put mutable values onto a channel, right? So we don’t have that problem.
Audience: So if I put sequence stuff, things with types, and I read them, I will get back a sequence of things…?
David: Anything you put in will come out the other end.
Audience: So you could put a channel in?
David: That’s one of the coolest things you can do, you can put a channel. Channels are first-class, so you can put channels onto channels, and there’s really cool tricks you can do with that. Good questions. Keep asking good questions because I know you can’t read my mind.
Okay, so, I make a channel. We’re going to go there and we’re going to be able to make progress because I have a separate go block that writes something into the channel so that the top guy can proceed. This sounds pretty crazy, right? Because the code is out of order, but we’re still going to get progress. So, that’s kind of interesting. So what this means is that first go block is going to run, it’s going to get suspended by the read, the next go block is going to run, allowing the code above to continue. Make sense?
And then, all I’ve done here is the exact same code, it just flipped things around, but this is very interesting. On the top one, we’re going to write into the channel, even though I have no expressions later there. It’s going to be suspended, right? Because until the bottom go block reads, and I can show you that this is true by going—right? Because nobody read it. But, if I do this, boom, totally works. It’s pretty cool. This is the essence.
So, you already can see that there’s all these amazing games you can play because you actually have really fine-grained control over asynchronous operations, which is not normally true when you’re doing async stuff. So let’s see less trivial stuff, things that are more real world.
So, here is a very simple thing called events, and what it does is it takes a DOM event, and it converts it into a channel, which will output those events as they happen. So, here I can say this is going to be an HTML element, it’s going to be the type of event I want to listen to in the DOM, and what we’re going to do is we’re going to add a listener on that element of that particular type, and we’re going to put each event onto that channel. And you notice events returns a channel. So, this is a very common pattern. You construct some source, and you get a channel as a value, so you can listen on it, or you can write to it or whatever. Does that make sense? So, you’re going to see a lot of channel-constructing functions. Yes?
Audience: What’s the difference between greater than…?
David: This is a take, and, if it points the other direction, it’s a put.
David: Oh, okay. So, why do I have this? You can call put and take, which are basically the async versions of these guys, like this is actually an asynchronous put. So that I don’t have to be inside of a go block to put something onto a channel, right? I told you that these guys can only be used inside of go blocks, but there are places where we’re not in the asynchronous world yet. We’re not inside sort of a channel thing yet. So, we need some way to get things into the channel from the rest of the world, outside of go blocks, right? So, put and take are the async analogs of these, which have the semantics of being synchronous when you’re in a go block.
Okay, so, this is kind of crazy. So, this is one of these cool things that, like, if you’re into functional programming, you’re like, wow, this is pretty nuts. This looks like it would not terminate, right? I say go while true, which means this will run forever, and this works because, if I don’t move the mouse, this go block doesn’t do anything. If I move the mouse, it’s going to read it, print it, loop around, and, if I stop moving the mouse, it’s not going to do anything. So, this is like a mini-event loop. So, unlike this thing where we normally have one giant event loop, we can actually make mini-event loops with core.async.
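The mini-event-loop shape translates directly to Go: a loop over a channel that is simply parked whenever no event is available. This is an illustrative sketch with string "events" standing in for mouse events:

```go
package main

import "fmt"

func main() {
	events := make(chan string)
	go func() {
		// stand-in for the DOM event source
		events <- "mousemove 10,20"
		events <- "mousemove 11,21"
		close(events)
	}()
	// the "go while true" loop: it only does work when an event arrives
	for e := range events {
		fmt.Println("got:", e)
	}
}
```

The loop looks infinite, but it consumes no CPU between events, just like the go block in the demo.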
So, I’m going to reload this. Let me get all of our mouse events. And that’s pretty nice. We can actually use this recursion as a pretty cool way to maintain state. We can actually hold state in a loop. If you read up on Concurrent ML, you see that they do this trick, this is not a new idea at all.
Okay, so I want to be able to map over channels. I have a channel, and what I want to do is I want to take a channel and, for every event that I get out of that channel, I want to apply some function and write it to another channel. So, this is sort of like the first channel transformation that we’re going to see. It takes some function, which we’re going to apply to every value that comes out of in. And what do we do? We make a channel called out and, forever, we’re going to read from in, apply f, write it to out. Okay.
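Here is roughly what that channel map looks like in Go (the name `mapChan` is illustrative; core.async borrows these semantics from Go, so the shape is the same: one goroutine forever reading from in, applying f, writing to out):

```go
package main

import "fmt"

// mapChan reads every value from in, applies f, and writes the
// result to a fresh out channel.
func mapChan(f func(int) int, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- f(v)
		}
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()
	for v := range mapChan(func(x int) int { return x * 10 }, in) {
		fmt.Println(v)
	}
}
```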
We can do filter. This is the channel version of functional operations you guys have probably seen before. So, here, we take some predicate function, we take the in channel, and, forever, we’re going to read from in. If it passes the predicate, we’re going to write it to out. So, down here, I’m going to make this ridiculous predicate which says: only if X is divisible by five and Y is divisible by ten are we going to do anything. But this is kind of nice, everything composes. So, I have the original source, I convert into a vector of numbers, and then I filter out the ones where the X component is divisible by five and the Y component is divisible by ten. And then I print out the contents. So you’re going to see here that those are the only values that appear.
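The filter transformation has the same shape as map (again a Go sketch with an illustrative name, using a simpler divisible-by-five predicate than the talk's X/Y one):

```go
package main

import "fmt"

// filterChan writes to out only the values from in that pass pred.
func filterChan(pred func(int) bool, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			if pred(v) {
				out <- v
			}
		}
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 10; i++ {
			in <- i
		}
		close(in)
	}()
	// keep only multiples of five
	for v := range filterChan(func(x int) bool { return x%5 == 0 }, in) {
		fmt.Println(v)
	}
}
```

Because each transformation takes a channel and returns a channel, they chain together exactly like the source → map → filter pipeline in the demo.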
David: Well, except what you’re asking for is impossible. It’s just a different kind of operation. There’s no way to unify the real synchronous world and asynchronous world. It can’t be unified, it’s not possible. I mean, there’s no way to do it. Async stuff is async stuff. Okay, so this is where it gets really fun. Go ahead.
David: So, futures are much more monadic. So, every time you do something to a future, you’re going to get a new future.
David: You do. You do need a new channel, but, for every event, you don’t have to construct anything new. This is something that I don’t like about promise- or future-based models: at every step, you have to construct something. Whereas I can generate a pipeline, and it’s constructed once. So all those mouse events that are flowing through the system, I only constructed three channels, whereas in a future-based model, every event, at every step in the pipeline, needs a future.
David: I don’t want to get too far into Elm, but Elm is very cool. It’s an implementation of FRP, and it tries to solve these FRP problems. What I’m demonstrating is very FRP-like. FRP does do one thing which I did not realize that it did. It makes a very big design trade-off because it wants you to be able to use signals in a very functional way, but, in order to do this, it actually imposes a global ordering on all events. So there’s a cost associated with this, and, in the CSP model, we say we don’t want to pay that price. So CSP gives you the primitives to manage synchronization, but it does mean that, occasionally, you have to go in there and make sure that things happen in the right order, whereas Elm tries to do that for you, but at a global cost that every part of your program is going to pay for.
Audience: … How is it not monadic?
David: So, I mean, people way smarter than me know about this. I’m not going to explain this, but Erik Meijer worked for years on this, on C#’s async/await. We use the exact same compilation strategy as C#’s async/await. And async/await is basically what theorists say is comonadic. So, core.async is also similarly comonadic. I have no idea what that means.
Audience: … quote Erik Meijer, you know, forget it.
There’s a way to communicate between windows in browsers now that has much lower latency, and we also use setImmediate, which is present in Internet Explorer as well as Node.js. And setImmediate is really fast. I’m not going to demonstrate it, but there’s a really nice benchmark in Go where you do thread-ring. You make one thousand channels, and you push a value through on one side, and how long does it take to get out the other side? Go can do this in about sixty to eighty milliseconds; core.async in Chrome can do it in about a hundred and sixty, which is only twice as slow, which is pretty wild. And that’s through the dispatch mechanism that we have just in the browser, so it’s pretty good. So that’s pretty neat, timeout channels.
So, nondeterministic choice. Often, when you write a piece of asynchronous code, you want your little block of code to do something based on whether this happens or that happens. You basically want to listen in on multiple channels, and whoever gives you a value first, you want to conditionally operate on that. Does that make sense? So, it’s a way to take multiple inputs and process them as they arrive. In Go, this is called select. Questions about that?
So, here I’m going to make a channel, and then I’m going to make a timeout channel, and this primitive operation is called alts, and you give it a vector of channels. So this is a first-class operation. You can actually grow and shrink that vector, and you can do really cool tricks, because in Go it’s static unless you use reflection. By default in Clojure, it’s dynamic. You select dynamically over a vector.
So of course, C is never going to provide a value because we never put anything into it, but the timeout channel will provide a value. And so, you saw, after a second, the timeout channel closed. Oh, and I should explain this: alts returns the value that was read off the channel as part of a vector. And I’m just destructuring it here. So this is the value, and this is the channel that responded. It actually responded, so, this way, you can do conditional matching. Like, did this channel respond, or did that channel respond. This allows you to do conditional logic.
Audience: So when a channel closes up you get nil?
David: When a channel closes you get nil, yes. And that actually is important. You can’t put nil on a channel. Nil is a terminal value. So, actually, if you try to put nil on a channel, it’s going to throw, and if you read a nil off a channel, you know it’s closed.
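Go handles the closed-channel case with a slightly different convention: a take from a closed channel yields the type's zero value plus a false "ok" flag, whereas core.async uses nil itself as the terminal value. A sketch of the Go idiom:

```go
package main

import "fmt"

func main() {
	c := make(chan int, 1)
	c <- 42
	close(c)
	v, ok := <-c
	fmt.Println(v, ok) // 42 true: the buffered value still comes out
	v, ok = <-c
	fmt.Println(v, ok) // 0 false: the channel is closed
}
```

Either way, the consumer has a definite signal that the channel is done, which is what makes "you can't put nil on a channel" workable.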
So, there’s an alternate syntax which you can use, which makes the alting operation look a little bit more like cond, which is Clojure’s version of if-else. So, it’s the same code, but here I can go alt! and say either the channel, which is never going to respond because there’s no value on it, or the timeout channel. And what you do is, I’m matching on this channel or I’m matching on that channel, and then I get the value here as a parameter to the body. This is basically an implicit body of a lambda. If you run this guy, same thing. It’s no different, it’s just sugar, sugar for matching on a particular channel.
I could have written this like this, I’m going to do a little bit of live coding. Cond, equals, like that. I can go: if it equaled c, console.log the channel and the value, or that it closed. And, hopefully, I didn’t make any mistakes. I love ClojureScript, it gives me errors. All right, so we should see the same thing happen. Yes. So it’s just sugar for doing that. It’s just sugar for conditionally matching on which channel responded. Make sense? Okay.
So here’s where it gets really fun. When we do the recursion, I’ve been using while, while true, just to represent an infinite recursion, but here we can make a loop and we can store state in the loop. So, this is a channel which enforces that we only get distinct values out of it. We’re only going to get distinct values out of this channel. So, if somebody puts in the same value more than once, we’re only going to see the first time that it appeared, which is very useful in user interfaces. So, here’s distinct, that takes a channel, and here’s the out channel. We’re going to loop with the last value that we saw, which, of course, initializes to nil; we’re going to read in. As long as what we read in does not equal the last, then we can write it to out, and then we recur. So this will only write to out when we get a new value.
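A Go sketch of the same distinct transformation, keeping the last-seen value as loop state in the goroutine (illustrative names, and using a boolean flag where the Clojure version uses nil for "nothing seen yet"):

```go
package main

import "fmt"

// distinct passes a value through only when it differs from the
// previous one; the previous value is the loop's state.
func distinct(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		last, first := 0, true
		for v := range in {
			if first || v != last {
				out <- v
			}
			last, first = v, false
		}
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for _, v := range []int{1, 1, 2, 2, 3, 1} {
			in <- v
		}
		close(in)
	}()
	for v := range distinct(in) {
		fmt.Println(v)
	}
}
```

Note it only suppresses consecutive repeats, which is exactly the key-repeat behavior demonstrated next.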
So here, once again, it’s like all this stuff composes in a functional way, except it’s asynchronous programming. So I’ve got key presses, I’m going to convert them into their key codes; I only care about the key codes, I only want the distinct key codes, and I’m going to print them out. So, if I’m up here, and I’m pressing D, it only worked once. I’m pressing A, F; if I start typing lots of different keys, okay, I can’t repeat a key.
David: Because I was typing really fast.
David: Yes, it’s got to be in a sequence. But, of course, I could keep memory. This is actually fun.
Audience: … create a setup
David: Yeah, exactly, never again.
We’re going to call this seen. when-not contains? seen, and what we’re going to do is we’re going to conj x onto seen, live coding. Code review, does it look right? So, I have a starting set. If I haven’t seen it before, we write it out, I add it to the seen set, and hopefully this is going to work. Never again. I don’t know why that happened, whatever.
Here we go. I’m running out of keys. All right, that’s pretty cool, so we can hold state in a loop. Here is a possible implementation for something called fan-in, which some of you might call merge. So, fan-in can take a vector of channels, as many as you want, and it’s going to merge them into a single channel. So this is really useful. You have tons of inputs coming in, and you want to be able to read from one thing. You don’t want to have to manage all these different sources. You can merge them into a single source. So how does this work? We have a vector of ins, but what’s really cool about this is alts takes a vector, so this is trivial to write. Alts already takes a vector of channels, and I only care about the value, I don’t care about which one responded, and I can just write that value out. So, this will take a bunch of different things and write them to out.
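For comparison, here is a fan-in sketch in Go. Because Go's select is static, the idiomatic Go merge starts one goroutine per input instead of alts-ing over a vector, which is precisely the kind of trick that alts makes trivial in core.async (names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// fanIn merges any number of input channels into one output channel,
// one forwarding goroutine per input.
func fanIn(ins ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, in := range ins {
		wg.Add(1)
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(in)
	}
	// close out once every input is drained
	go func() { wg.Wait(); close(out) }()
	return out
}

func main() {
	a, b := make(chan int), make(chan int)
	go func() { a <- 1; a <- 2; close(a) }()
	go func() { b <- 10; close(b) }()
	sum := 0 // arrival order is nondeterministic, but the sum isn't
	for v := range fanIn(a, b) {
		sum += v
	}
	fmt.Println(sum)
}
```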
I’m going to move a bit more quickly because I don’t want to talk forever. So, I want to demonstrate here that one thing that’s really cool about CSP is CSP works as both a push and pull system. So this is really important. I can make generators. Remember, I said, if I write, and nobody reads, then I have a process that’s stuck. But I can make a process that simply is a generator so that I can source values out of it. It’s not actually generating its values asynchronously, it’s just something I want to be able to stream things out of.
So this is what this is, this is a channel that just simply produces ints. And we can do this because we can hold the current integer inside the loop. So, if somebody starts reading off of this, they’re going to start getting integers starting at one. Here’s a different channel, which is also pretty useful. It’s a channel where you, given a millisecond delay, it will simply bang a value onto the channel. It’s just going to put the current date at different intervals. So, you could use this to throttle something. Think about it, you can start using these different channels and compose them so you could use this to throttle another channel.
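A Go sketch of the ints generator: the current integer is the loop's state, and because the put suspends until somebody takes, a new value is produced only on demand. (The interval channel from the talk would be roughly `time.Tick` in Go.)

```go
package main

import "fmt"

// ints is a pull-driven generator of the integers starting at 1.
func ints() <-chan int {
	out := make(chan int)
	go func() {
		for i := 1; ; i++ {
			out <- i // suspends until a consumer takes
		}
	}()
	return out
}

func main() {
	c := ints()
	fmt.Println(<-c, <-c, <-c) // pull three values on demand
}
```

The generator goroutine simply stays parked on its put whenever nobody is reading, which is what makes the push/pull duality work.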
So, you’re going to see what’s going to happen every time I press space bar, so I’m using the interval, and I’m using ints. So ints is a generator, and I’m using the bang from the interval channel to generate the integers. If I press space bar, it’s going to switch. And I have multiple loops going on here, so I’m not just doing the intervals channel; I’m actually using it as an accumulator, so I have an inner loop, which is actually reading off of each process and then adding the next set of integers. If I press space bar again, I go back precisely to where I was in the other process. So this is pretty complicated. I’ve composed a bunch of channels, and I basically can do cooperative multitasking.
David: So, you have to handle that case. Because what’ll happen, if you don’t handle it, you’ll probably accidentally write a nil onto another channel and that will blow up. You’ll get an error, and it will tell you what you tried to do.
David: You’ll only get it at run time. So, when you design your system, you have to handle closing.
David: So, I kept my case simple because you want it to be more robust. There was no such thing as fan in, so I wrote it. There is now something called merge, which handles closing. So, I would just use the provided merge. So, a lot of the stuff that I’m showing you, don’t use my versions. There are now official versions of these operators that are robust enough for that.
David: So, a lot of those libraries, they require you to use functional composition, so a lot of what I’m showing you is I get to write regular Clojure code. I don’t have to write everything in terms of combinators. So, if you’re going to do async.js, you’re sort of limited to everything has to be combined by functions, which may or may not be nice, depending on what you’re doing. There’s a much bigger example that I’ll show you that I would hate to write in that style. I’ve seen people do it in that style. That’s horrible, but that’s the main difference is that, inside of a go block, because it transforms your code for you into a state machine, you get to write your code in a natural style. You don’t have to do everything through functional composition.
So we’re going to get quite a bit fancier. So this is just a bunch of boilerplate around the DOM.
Audience: In that last example, what happens to channels that are .. do they stop, do they fall?
David: They keep going. So, again, if you want to make it more robust, each one of those processes should take a cancel. That’s something, again, I’m trying to keep my examples simple so that you can mostly get it. This entire repo is on GitHub, so you can run these examples yourself.
Audience: Will you give us the, the address?
David: The address, yes. Okay, so—
David: I mean, they are, they are queues. Channels are fundamentally queues. But we get a nicer interface to them than you normally have. Okay, so I want to talk a bit about what I actually described in my description of this talk, which is how we can take these primitives and build extremely responsive interfaces. So I’m going to show you that I can use the exact same code for doing list item selection, like highlighting. I can use this same code whether I’m targeting the DOM, or whether I’m making a text adventure game. I could actually use the exact same async handling code for both interfaces, it doesn’t matter. So what does it look like?
So, what we’re going to do is we’re going to grab the DOM element, the list element, a UL, and we’re going to allow hovering. We’re going to have a hover channel over the UL for each of the LI elements. We’re going to allow movement by key if I press up or down, so we’re going to fan in. Whether it’s the mouse or the keys highlighting something in the list, we’re going to support selecting if we click, and then we’re going to make a selector. And see, we can fan in three independent streams of events: hover events, key events, and clicks. This is why merging streams is really nice.
So, there we have our hover. I can use the arrows, I can click, and you see it prints out the result down there. And what I want to show is that we can apply the exact same thing to a text adventure representation. So here, this is literally almost identical, right? I just want to use the window as the source of the key events, and there are just a bunch of details around that, but you notice the selector, the critical function, is unchanged; there’s not going to be any difference for doing this different visual representation. So there it is, down there at the bottom. So this is, to me, one of the biggest ideas, and something that I’ve been looking for for a very long time when it comes to UI programming, which is that we design our UIs around events, but at a really abstract level.
Separated from: am I going to build something in the DOM, am I going to build something on the iPhone? Is it going to use touches, or is it going to do mouse clicks, or am I going to have to do a text UI, or an audio-based UI, or braille? People actually end up rewriting these components over and over and over again for every possible target, but, with this model, where we actually abstract away from the concrete event sources, then we can have really, truly reusable components. We don’t have to rewrite that logic.
So I have—what’s the wifi here? The password. There’s no public one?
David: It’s okay.
David: Oh it worked, yeah.
David: Oh, docker?
David: Oh yeah, I see it. There we go.
David: Sweet. Okay, let’s try again. So, if you want to know more about this, I’ve written tons of blog posts. So, starting with this one, there’s a lot of fun stuff; a lot of the examples I’ve shown are available here. And then I spent some time doing a very extensive blog post because I wanted to try this on something non-trivial, something that I would actually write for work. So I took an autocompleter. Basically, autocompleters, like ComboBox autocompleters, are actually a massive pain in the ass to write code for because it’s really complicated. You have to handle tab, you have to handle input blur, input focus. It’s crazy the amount of events you have to handle. You have to asynchronously fetch from some server. Anyways, it’s a good non-trivial thing. The autocompleter in jQuery is like five hundred lines of code, and it’s completely unreadable. The Twitter autocompleter is like thousands of lines of code, completely unreadable. Nobody can understand this.
So I was like, it’s a good example to try: can we build a more sensible system that’s constructed out of reasonable components? So here’s the autocompleter that I built with core.async. So I can go. A lot of subtle things work. When I press delete, it disappeared. When I press tab, right mouse, mouse out, lots of subtle things. Like, if you use Google, they do lots of clever things, like what happens if I click-drag, come back on an element, and it’s not the same one. I click-drag and come back on the same one. So this is all stuff that very serious autocompleters have to do in order for users to not be completely pissed off.
David: Yeah, Firefox is slow. I mean, there’s nothing we can do about that. I mean, I complain, I'm like, Firefox is slow, come on. But they’re working on it. A lot of the problems with Firefox are mostly, from what I can tell, due to garbage collection, and they’re working on a generational garbage collector, so, hopefully, that will fix ClojureScript on Firefox.
David: Well, I mean, part of the thing, and I demoed this, was that the idea is more like that, number one, if you’re doing that many updates, that’s pretty extreme.
David: I mean, people have already started building things with core.async, and we’ve done a lot. Like, this blog post was written before we did a bunch of optimization. We’ve done a ton of optimization. I actually don’t know how we could make it any faster, it’s pretty fast. But, for example, here’s a hundred thousand updates on the DOM. I didn’t talk about this at all, but you can have buffered channels, and buffered channels allow you to basically split up the work. So this is a buffered channel, and you notice the UI is not locked up. It’s a hundred thousand DOM updates. That’s pretty cool. So there’s a lot of flexibility for doing interactive stuff. Other questions?
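The buffering idea is the same in Go: a buffered channel lets a producer run ahead of the consumer by up to the buffer size, so puts don't have to rendezvous with a taker one-for-one. A minimal sketch:

```go
package main

import "fmt"

func main() {
	// buffer of 3: up to 3 puts succeed with no taker waiting,
	// which decouples the producer's pace from the consumer's
	c := make(chan int, 3)
	c <- 1
	c <- 2
	c <- 3 // none of these block
	close(c)
	sum := 0
	for v := range c {
		sum += v
	}
	fmt.Println(sum)
}
```

With an unbuffered channel, the first put would have suspended until a take arrived; the buffer is what lets the work be split into batches.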
David: So you have to specifically handle errors. You have to think, oh, this might fail, so I’m going to write an error. I’m going to handle it, I’m going to catch the error, write it into the channel so that somebody downstream can catch it and decide what to do, maybe wrap it again. But the thing that’s still cool about this is that, normally, when you have an error, when you use a promises model, you’re just going to get a mangled stack. So this actually allows us to handle errors across async boundaries of the event loop. Normally, the stack is gone. There’s no stack because it was an asynchronous op. And somebody else was expecting an asynchronous op. So I can throw an error, catch it, wrap it in another error, throw an error, catch it, wrap it, and then, when I have it, I can actually open up that error and say, okay, this is the stack trace at this point in the event loop, this is the stack trace at this point of the event loop, this is the stack trace. Which is pretty cool.
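The "errors flow through the channel as values" idea can be sketched in Go like this (the `result` type and `work` function are illustrative, not from the talk):

```go
package main

import (
	"errors"
	"fmt"
)

// result carries either a value or an error through the channel, so a
// failure can cross the async boundary and be handled downstream.
type result struct {
	val int
	err error
}

func work(fail bool) <-chan result {
	out := make(chan result, 1)
	go func() {
		defer close(out)
		if fail {
			out <- result{err: errors.New("boom")}
			return
		}
		out <- result{val: 7}
	}()
	return out
}

func main() {
	if r := <-work(true); r.err != nil {
		fmt.Println("handled downstream:", r.err)
	}
}
```

Because the error is just a value on the channel, each stage can catch it, wrap it with its own context, and pass it along, rather than losing the stack at the async boundary.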
David: So I’m sort of alluding to that with control channels. So you often will, like, I want to make a channel, but also, if I get back the channel, I want to give it a control channel when I construct it. So if I put anything on the control, the other channel closes; it’s a common pattern. Actually, a lot of the tricks that I recommend, it’s like the Go people, they have a lot of tricks, and, honestly, we mostly just copied the tricks that we’ve seen because they’ve been doing this for a couple years now. Other questions?
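That control-channel pattern is the "done channel" idiom in Go, where it originated. A sketch (illustrative names): the process selects between producing a value and hearing from the control channel, and shuts down when the control closes.

```go
package main

import "fmt"

// counter emits integers until the control channel is closed.
func counter(control <-chan struct{}) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for i := 1; ; i++ {
			select {
			case out <- i:
			case <-control: // anything on control shuts us down
				return
			}
		}
	}()
	return out
}

func main() {
	control := make(chan struct{})
	c := counter(control)
	fmt.Println(<-c, <-c, <-c)
	close(control) // tell the process to stop
}
```

Handing the control channel in at construction time is what lets a consumer later cancel processes it would otherwise have no handle on.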
David: So it is sometimes tricky. Don’t get me wrong. The main source of bugs in core.async, it’s not like, everything I’ve shown you is awesome because I’ve worked on it and spent some time putting it together. So, in CSP, you have this problem of deadlock, and that’s the problem where you inadvertently write, and then nobody will ever read, or you read, and nobody ever puts.
David: No, I mean, you can’t really do this, but something that we’ve been working on that we just landed a couple weeks ago, which needs more work but does pretty much work, is source maps. So, ideally, source maps would allow you to set a breakpoint on the thing, and then, when you get to that spot, you could at least say, what do my locals look like? So, honestly, what I’ve been doing up until this point was just print debugging, printing out what’s going on. So, ideally, and, again, I haven’t had time to test this, when you set the breakpoint, you get there and you can look at the stack and, like, where am I, what got passed in. Other questions? Nope. Cool. Oh.
David: So Rich Hickey wrote official versions of a lot of the things I hand-rolled. Rich recently landed official versions of stuff that handles closing and all this stuff, publish, subscribe. I didn’t really talk about pub/sub at all, and so he has a lot of stuff around pub/sub, a lot of stuff around pausing, resuming, all these things that are really nice when you want to build a system, that you honestly don’t want to have to write yourself. They’ll be a part of the standard library, so you don’t have to do that. And then, a lot of that stuff I haven’t had the chance to play around with, but I’m really excited about that because that’s stuff that people don’t have to write.
David: Yeah, definitely still alpha, I mean, and it’s probably going to be alpha for a little while as people try out the standard library functions. I mean, a lot of what I’ve just shown you is just real primitive. And in a lot of my examples I had to build stuff from scratch. You know, in the next few months, people are going to stop doing that and use the standard stuff. But people are already deploying it. I mean, people are already writing production code. Especially on the Clojure side, it’s very solid, and there’s been a lot of—I’ve seen a lot of excitement about ClojureScript because of core.async. You know, frontenders are like, wow, this is pretty cool. I think it’s good.