
"but other than that am I missing something?"

I think the big one is that a futures-based system, no matter how you swing it, lacks the property that on an unbuffered Go channel (which is the common case) a successful send is also a guarantee that someone else has picked the value up, so a send or receive event is also a guaranteed synchronization point. Providing that guarantee takes work in the compiler and runtime, with memory barriers and the like. I don't think any futures implementation can offer it, because without those barriers being inserted by the compiler or runtime it's simply not a guarantee you can ever have.
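
A rough sketch of what I mean (my own illustrative example, not anything from the Go docs): because the send on an unbuffered channel can't complete until the receiver takes it, the receiving side is guaranteed to see everything the sender did before the send.

    package main

    import "fmt"

    func main() {
        done := make(chan struct{}) // unbuffered: a send blocks until it is received
        var prepared string

        go func() {
            prepared = "ready" // happens before the send completes
            done <- struct{}{} // rendezvous: blocks until main receives
        }()

        <-done                // receive: the goroutine is guaranteed to have gotten this far
        fmt.Println(prepared) // safe to read; always prints "ready"
    }

The send/receive pair is the sync point; no extra locking is needed around `prepared`.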

To which, naturally, the response in the futures-based world is "don't do that". Many futures-based runtimes aren't truly running concurrently on multiple CPUs, where that would be an issue anyway, although you can still end up with the single-threaded equivalent of a race condition if you work at it; it's just considerably harder to get there than with multi-threaded code.

This goes back to the fact that channels are fairly heavyweight as concurrency primitives go, call it two or three times the cost of a mutex. They provide a lot, and when you need that it's nice to have, but there's also plenty of plain mutex use in Go code, because when you don't need the extra machinery the price adds up.
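
For instance (again just a sketch of the trade-off, with made-up names): when all you need is mutual exclusion around some state, a sync.Mutex is the cheaper tool, and the channel's hand-off semantics would be pure overhead.

    package main

    import (
        "fmt"
        "sync"
    )

    type counter struct {
        mu sync.Mutex
        n  int
    }

    func (c *counter) inc() {
        c.mu.Lock()
        c.n++
        c.mu.Unlock()
    }

    func main() {
        var c counter
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.inc() // plain mutual exclusion; no hand-off or sync point needed
            }()
        }
        wg.Wait()
        fmt.Println(c.n) // 100
    }
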



Thanks for taking the time to respond. I will now think of channels as a queue plus a mutex/communication guarantee, and not just a queue. So in Go's unbuffered case (only?) a channel is more than a 1-item queue. Also, with Go's select, I now get that the channels themselves are hooked up to notify the select when they are ready?
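
Something like this (my own toy example) is roughly what I have in mind: the goroutine parks in select until one of the channels it is watching becomes ready, then runs that case.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        fast := make(chan string)
        slow := make(chan string)

        go func() { time.Sleep(10 * time.Millisecond); fast <- "fast" }()
        go func() { time.Sleep(50 * time.Millisecond); slow <- "slow" }()

        for i := 0; i < 2; i++ {
            select { // blocks until fast or slow has a sender ready
            case msg := <-fast:
                fmt.Println("received:", msg)
            case msg := <-slow:
                fmt.Println("received:", msg)
            }
        }
    }
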



