
Part of me questions the old view that functional programming isn’t performant.

If you think about it: 1.) most performance comes down to memory access. 2.) It is widely known that accessing memory on the stack is faster than allocating memory on the heap. 3.) calling a function pushes everything onto the stack, sets up a new stack frame, and when it's done releases everything from the stack.

So my thinking is: if functional programming is, by definition, about organizing your program as a pure composition of functions, doesn't that also imply an ideal memory access pattern? Can someone tell me where I'm wrong?

I guess the one place this comes into doubt is with strings of unknown length and other data types that must be heap-allocated, but even in those cases the problem would seem to affect every other paradigm equally.
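For concreteness, here's the kind of pure composition I mean, as a toy Haskell sketch (illustrative only; whether the intermediate lists actually get materialized depends on the compiler, e.g. GHC's list fusion):

    -- A pure pipeline: each stage derives a new list from the previous one.
    -- In GHC these intermediate lists are immutable heap structures unless
    -- fusion eliminates them; nothing here is a stack-local array.
    pipeline :: [Int] -> Int
    pipeline = sum . filter (> 10) . map (* 2)

    main :: IO ()
    main = print (pipeline [1 .. 100])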



Then there's memory locality and shared memory: if you keep bitbanging the same area, your processor will be very happy. FP will create temporary structures to derive the next computation, and that may cost a bit much. I say may because I have no experience with this, just a semi-educated guess. Our processors are well made to hammer the same instructions on the same memory locations.

ps: of course FP shields you from having to manage the correct state updates seen in imperative programming, which might require a lot, a lot, of reads and writes to ensure things don't get messed up.
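Roughly the contrast I have in mind, as a toy Haskell sketch (illustrative only, not a benchmark; the names are made up):

    import Control.Monad (forM_)
    import Control.Monad.ST (ST, runST)
    import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)

    -- Pure style: every round rebuilds the whole list, so each round's data
    -- is a fresh temporary structure rather than the same memory again.
    bumpPure :: Int -> [Int] -> [Int]
    bumpPure rounds xs = iterate (map (+ 1)) xs !! rounds

    -- In-place style: one unboxed array at a fixed spot, hammered repeatedly,
    -- which is the cache-friendly "same memlocs" pattern.
    bumpInPlace :: Int -> [Int] -> [Int]
    bumpInPlace rounds xs = runST $ do
      let n = length xs
      arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
      forM_ [1 .. rounds] $ \_ ->
        forM_ [0 .. n - 1] $ \i ->
          readArray arr i >>= writeArray arr i . (+ 1)
      getElems arr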


I believe a lot of this boils down to so many data structures not being kind to state updates. And most of the ways of making that work out fine do so by adding more data and indirection. Even the tricks that make things a bit more "performant" typically do so by adding more data to the mix, not less.

This is even harder if you work with data structures like threaded trees, where there are so many links that even the zipper approach basically falls flat. (And the zipper already has performance questions if you do a deep edit, if I am not mistaken. What could be a single value mutation is now a rewiring of every link down to the value that you wanted to change.)
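To make the "rewiring every link" point concrete, here's a toy sketch with a plain immutable binary tree (not a threaded tree), in Haskell:

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Changing one value at the end of a path copies every Node along
    -- that path: what would be a single in-place mutation becomes a
    -- rebuild of each link from the root down to the edited element.
    updateAt :: [Bool] -> (a -> a) -> Tree a -> Tree a
    updateAt _ _ Leaf = Leaf
    updateAt [] f (Node l x r) = Node l (f x) r
    updateAt (goLeft : path) f (Node l x r)
      | goLeft    = Node (updateAt path f l) x r
      | otherwise = Node l x (updateAt path f r)

The untouched subtrees are shared, so it's only O(depth) copying rather than O(n), but it's still allocation where a mutable tree would do none.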


> 3.) calling a function pushes everything onto the stack

This is a very C-centric view of it.

In FP, sometimes they're functions, sometimes they're closures. Sometimes they're fully applied and sometimes they're partially applied.

You're probably on the heap, not the stack, for two reasons: tail recursion, and variables very often outliving the return.
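A minimal Haskell sketch of the "outliving the return" part (the comments describe a typical GHC-style implementation; optimizations can change the details):

    -- The returned function captures 'step'. That captured value has to
    -- survive after makeAdder's own call is over, so it can't sit in a
    -- stack frame that gets popped on return; the closure's environment
    -- is kept on the heap instead.
    makeAdder :: Int -> (Int -> Int)
    makeAdder step = \x -> x + step

    main :: IO ()
    main = do
      let addFive = makeAdder 5   -- a heap-allocated closure
      print (addFive 10)          -- 15, long after makeAdder returned

Partial application works the same way: the partially applied function is a heap object carrying the arguments it has already been given.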



