As a case study, we benchmarked this against other languages that use immutable data structures for a recent production app we were building. Our conclusion was that the penalty of using immutable HAMTs on the Node/JS platform is just too high, at least in current library form. Immutable structures are usually somewhat slower, but the relative penalty of a HAMT versus a plain mutable structure is much higher on Node/JS than elsewhere. Floats as the only number type, implicit conversions in and out, the missing pop-count intrinsic, and similar overheads make an immutable HAMT cost far more relative to a mutable JS Map than the equivalent comparison does in .NET or Java, let alone C or Rust.
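To make the pop-count point concrete: HAMT node lookups index into a 32-way branch using the population count of a bitmap, and JS exposes no popcount instruction, so libraries fall back to bit twiddling. A sketch of the standard SWAR fallback (function name is mine, not from any particular library):

```javascript
// Count the set bits in a 32-bit integer without a hardware popcount.
// This is the classic SWAR trick JS HAMT implementations resort to; on
// .NET/Java/C this is a single intrinsic/instruction instead.
function popCount32(x) {
  x = x - ((x >>> 1) & 0x55555555);          // pairwise bit sums
  x = (x & 0x33333333) + ((x >>> 2) & 0x33333333); // nibble sums
  x = (x + (x >>> 4)) & 0x0f0f0f0f;          // byte sums
  return (x * 0x01010101) >>> 24;            // add bytes, take the top one
}

console.log(popCount32(0b1011));     // 3
console.log(popCount32(0xffffffff)); // 32
```

Each HAMT lookup does one of these per trie level, so the missing intrinsic is paid for on every single access.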
Given that Node is mostly single-threaded as well, some of the advantages (not all!) of immutability diminish on that platform, making the performance penalty harder to swallow, at least for my use case.
It may not matter for your case, of course, but I find the benefits of these structures come from their quick clone/copy time, snapshot-like characteristics, cheap diffs of large collections, etc. For this to matter, the collection has to be big enough that a linear copy/equals/etc. is too slow for the use case. At that point raw speed starts to matter, because the cost of random lookup drifts further from that of a standard mutable map; for JS we deemed the penalty too high and chose another backend platform more amenable to writing generic data structures.
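A contrived sketch of the snapshot point (sizes and state shape invented): with a plain mutable Map, every snapshot is a full O(n) copy, which is exactly the cost a persistent HAMT avoids by sharing structure:

```javascript
// Hypothetical pattern: keep a snapshot of app state before each mutation
// so we can diff or undo later. With a mutable Map the snapshot is a full
// O(n) copy, even when only one key will change.
const n = 100_000;
const state = new Map();
for (let i = 0; i < n; i++) state.set(i, `value-${i}`);

const history = [];
function update(key, value) {
  history.push(new Map(state)); // copies all n entries per snapshot
  state.set(key, value);
}

update(0, "changed");
console.log(history[0].get(0)); // "value-0" — the snapshot is preserved
console.log(state.get(0));      // "changed"
// A persistent HAMT makes the snapshot O(1) (share the old root) and pays
// only an O(log n) path copy per update — but on Node the constant factors
// on the lookup side are what bite.
```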
Yes, I evaluated it. The complexity of knowing where to convert from/to plain JS, plus the extra library syntax to learn, plus the performance cost of toJS, made it a poor fit for my particular use case. Nearly as much of a hard sell at work as saying “let’s rebuild the UI in Clojurescript,” and without providing as much benefit.
My use case is pretty atypical though, and it’s worth checking out if you have more reliable data shapes/update paths.
The key design difference between the two is that immutable.js wants you to keep your data in its formats (and operate on it with its functions) until you hit some piece of code that needs a plain JS object, whereas immer is scoped (mostly?) to update functions and always returns plain JS. Immer seems designed to solve the narrower problem of changing only the references to changed objects, and it looks like it does a good job of it.
So with immutable.js, you get more powerful data manipulation throughout the app, at the cost of having to know when and where to convert things back to plain JS. With immer, you get tightly-scoped immutable updates to objects, and inside the update function you can treat them as if they’re mutable, letting you write more idiomatic-looking JS. Instead of spreading 4 layers of objects to update one state flag, immer lets you say:
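Something like the following (the state shape and field names are invented; `produce` is immer's real entry point, but to keep this snippet runnable without the package I've stubbed a toy copy-on-write version with a Proxy — real immer does far more):

```javascript
// Toy stand-in for immer's produce(), only so this runs without the package.
// It shallow-clones each object along any written path (copy-on-write) and
// leaves every untouched object shared with the original.
function produce(base, recipe) {
  const copies = new Map(); // original object -> its shallow clone

  function wrap(obj, getCopy) {
    return new Proxy(obj, {
      get(target, prop) {
        const current = copies.get(target) || target;
        const value = current[prop];
        if (value === null || typeof value !== "object") return value;
        // Reading deeper: hand the child a way to clone itself (and all
        // its ancestors) lazily, only if it is ever written to.
        return wrap(value, () => {
          const parent = getCopy();
          if (!copies.has(value)) {
            copies.set(value, Array.isArray(value) ? value.slice() : { ...value });
          }
          parent[prop] = copies.get(value);
          return copies.get(value);
        });
      },
      set(target, prop, value) {
        getCopy()[prop] = value; // clone ancestors, then write to the clone
        return true;
      },
    });
  }

  recipe(wrap(base, () => {
    if (!copies.has(base)) copies.set(base, { ...base });
    return copies.get(base);
  }));
  return copies.get(base) || base; // base itself if nothing was written
}

// The immer-style update: write to a draft as if it were mutable.
const state = {
  view: { modal: { isOpen: false }, sidebar: { collapsed: true } },
  user: { name: "Ada" },
};

const next = produce(state, (draft) => {
  draft.view.modal.isOpen = true; // one assignment instead of nested spreads
});

console.log(next.view.modal.isOpen);  // true
console.log(state.view.modal.isOpen); // false (original untouched)
console.log(next.user === state.user); // true (unchanged branch is shared)
```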
Every object reference along that path gets replaced with a new one, while everything else hanging off “state”, “state.view”, etc. keeps its same references unchanged.
If you can keep everything inside of immutable.js, it is the fastest thing out there. As soon as you have to drop back to plain JS, it gets slower. See this performance graph: https://immerjs.github.io/immer/performance/
Thanks for reminding me of this. We finally dropped IE11 support, so may be able to get some benefits from introducing immer either by itself or by bringing in Redux Toolkit.