A good micro-module removes complexity. It has one simple purpose, is tested, and you can read the code yourself in less than 30 seconds to know what's happening.
Take left-pad, for example. Super simple function, 1 minute to write, right? Yes.
But check out this PR that fixes an edge case: https://github.com/azer/left-pad/pull/1
The fact of the matter is that every line of code I write myself is a commitment: more to keep in mind, more to test, more to worry about.
If I can read left-pad's code in 30 seconds, know it's more likely to handle edge cases, and not have to write it myself, I'm happy.
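For reference, the whole module is on this order (a from-memory paraphrase, not the published source):

function leftpad(str, len, ch) {
  str = String(str);
  if (!ch && ch !== 0) ch = ' ';  // default pad character is a space; 0 is allowed
  while (str.length < len) {
    str = ch + str;               // prepend until the target length is reached
  }
  return str;
}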
The fault in this left-pad drama is not "people using micro-modules". The fault is in npm itself: all of this drama happened only because npm is mutable. We should focus on fixing that.
> every line of code I write myself is a commitment
That's true. However:
Every dependency you add to your project is also a commitment.
When you add a dependency, you're committing to deal with the fallout if the library you're pulling in gets stale, or gets taken over by an incompetent dev, or conflicts with something else you're using, or just plain disappears. If you add a dependency for just a few lines of code, you're making a way bigger commitment than if you'd just copy/pasted the code and maintained it yourself. That's why so many people are shaking their heads at a 17-line dependency. It's way more risk than it's worth. If you need a better stdlib for your language (some of us write PHP and feel your pain), then find one library that fills in the gaps and use that.
> If you add a dependency for just a few lines of code, you're making a way bigger commitment than if you'd just copy/pasted the code and maintained it yourself.
This is a problem with NPM, not with dependencies. With a package manager that supports stable builds and lockfiles, you pin to a specific version and there is no way upstream can cause problems. A lockfile is a pure win over vendoring.
Yes, there is. Just look at NPM's left-pad case.
The owner of the package removed it from the repository. It doesn't matter what version you pin to if there is no longer any code to download.
The only way to prevent this is to run your own local mirror of the third-party package repository.
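For example, npm can be pointed at such a mirror with one line of config (the URL is a placeholder; Sinopia is one registry proxy that works this way):

# .npmrc -- point npm at a local caching mirror so installs survive an upstream unpublish
registry=http://localhost:4873/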
You're talking about the purely technical aspects of package management that keep your build from breaking. My point is that there's a lot more to it than that. Lockfiles do not keep software from requiring maintenance. Introducing a dependency means handing off a chunk of your software project to someone else to maintain. If they do it wrong, you're on the hook for it.
For example, I was a maintainer for an admin UI component of the last version of Drupal core, and we decided to pull in a query string handling library written by Ben Alman. It was a good decision, and it hasn't caused any problems. But it still meant we were trusting him to maintain part of our codebase. It was also an implicit commitment to every user of Drupal that if Ben quit maintaining that library, we would step in and fix any problems that came up. You don't get rid of that commitment with a lockfile.
Here is an oversimplified model to illustrate my basic point:
A dependency introduces some constant amount of risk (d) that does not vary with the size of the dependency. Every line of code you write yourself also introduces a much smaller constant amount of risk (y).
If you introduce a separate dependency for every line of code in a 1000-line project, your risk is 1000d.
If you can pull in someone else's code for the whole thing and don't need to write any code yourself, your risk is d.
If 200 lines of your code can be replaced with an external library, your risk is d + 800y.
I think the real disagreement here is over the value of d. My experience leads me to put the value of d pretty high relative to y, so to me 1000d is the worst possible case. If someone sees d as equal to y, then they'd see dependencies as no problem whatsoever.
(Obviously in reality the risk of a dependency is not really constant - it's probably more like d + 0.1y or d + 0.01y per line of the dependency's code, since a 10-line dependency is less risky than a 1000-line dependency. Hopefully my point still stands.)
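Plugging made-up numbers into the model makes the disagreement concrete (the 50:1 ratio below is an assumption for illustration only, not a measured value):

var y = 1;        // risk per line of your own code
var d = 50 * y;   // risk per dependency (assumed 50x a line of code)

console.log(1000 * d);     // 50000: one dependency per line of a 1000-line project
console.log(d);            // 50:    one dependency covering the whole thing
console.log(d + 800 * y);  // 850:   one dependency replacing 200 of 1000 lines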
You can't escape problems by bundling specific library versions. You just get a different set of problems. When you require a specific version of a library, you're making your code incompatible with anything that requires a higher or lower version of that library. You're also assuming there will never be a security fix that requires you to update your dependency.
...you're making your code incompatible with anything that requires a higher or lower version of that library.
Actually that's not correct when using node/npm (or anything else in the ecosystem like browserify). That is one of the impressive things about this platform: any number of different versions of the same module can be required by the same app. It would be nuts to do that in your own code, but as long as the craziness is in your dependencies it really doesn't matter.
And that kind of works in a dynamic language. You could make it work in a statically-typed language, but then the problems become more apparent. If X depends on Y and Z1, and Y depends on Z2, and Y exposes an object created by Z2 in its public API, the X developers might look at the Y API docs and try to call Z1 functions on that Z2 object! Worst of all, it might very well work in the normal cases, and the issue might not be noticed until it's in production.
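A hypothetical sketch of that hazard (all names are made up; z@1 and z@2 stand in for Z1 and Z2):

var z = require('z');       // X's own copy resolves to z@1
var y = require('y');       // y internally bundles its own z@2

var thing = y.makeThing();  // actually an instance of a z@2 class
z.helperFromV1(thing);      // may "work" on common inputs, then fail in
                            // production where z@1 and z@2 diverge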
Using multiple versions of the same library is code smell. It's a stability issue, a security issue, a complexity-breeder, and an absolute nightmare for packagers.
Yeah I'm sure it sucks for distro packagers. Why are they using npm, though? It's not designed for their use case.
Actually though you're just talking about some bugs in X, or possibly some design flaws in Y. Passing [EDIT, because bare objects are fine:] class instances around like that is the real code smell. So much coupling, so little cohesion. We call them "modules" because we like modularity.
It's not that packagers are using npm. It's that they might want to package a node application for their distro's package system, and now they have to sift through thousands of npm packages (already a nightmare). They can't just make a system package for every npm package, not just because that would violate packaging guidelines for any reasonable distro, but because one project can pull in multiple versions of a single package. The Nix and Guix crews can handle that (and that they can is as much of a bug as it is a feature).
There is no clean way of packaging a typical node application.
Passing class instances around like that is the real code smell.
Often, yes, but not always. Allowing fine-grained control over performance is one good reason that a library might expose class instances from one of its dependencies.
Is Node.js itself appropriate for packaging? I think maybe not. It changes really quickly, and has done for some time. Anyone coding in Node installs the particular versions she needs without regard to the distro. Most Node modules are just libraries installed in and for particular projects. There are tools written in node, but for the most part they focus on coding-related tasks that also tie them to particular projects, e.g. beefy or gulp. There's no need to install such tools above the project level, and certainly no reason to install them on the system level.
A distro that still packages python 2 (i.e. all of them) has a particular "velocity", and therefore it has no business packaging Node or anything written in it. Maybe a distro could package TJ's "n" tool (helpfully, that's written in bash rather than node), which would actually be handy for distro users who also use Node, but that's it.
I'm not talking about packaging node libraries for developers. No node developers are going to use system packages to install their libraries. What I mean is packaging applications written in node for end users.
For example, you can install WordPress on Arch with `pacman -S wordpress` and you'll have a managed WordPress installation in /usr/share/webapps/wordpress. Then you just edit some WordPress config files, set up your HTTP server to serve PHP from that directory, and you have a WordPress blog.
It would be nice to be able to do the same with Ghost.
Ghost may be a special case. I wasn't familiar with it, but I just attempted to install it in an empty directory, without success. The first time I ran "npm i ghost", with node v5.9, it went into an infinite loop creating deeper and deeper ".staging/ghost-abc123/node_modules/" sub-directories of node_modules, which seems an... odd thing to do. After killing that, I noticed that they recommend Node LTS. Fair enough. I ran "sudo n lts", then "npm i ghost" again. This time, I didn't have to kill it, because the preinstall script errored out. Based on the log, this script installs semver, then requires a file that can't possibly exist at preinstall time. Both of those are obnoxious, but at least it's possible to install semver.
I'm sure if I look hard enough there are some silly idiosyncratic steps one might take to install this module. Suffice it to say that it's not installing the "npm way", so it's misguided to blame npm for packaging difficulties.
More generally, I can certainly understand distro packagers' refusal to recreate towering pyramids of node dependencies in their own package system. Some lines have to be drawn somewhere, and "major" modules must bundle many of their dependencies when packaged for distros. If module maintainers don't do this, and distro packagers can't, then the modules can't be packaged.
Or you could have all the bugs introduced in everyone's hand-rolled implementations of it. I'll take multiple versions of the library instead. It's much easier to track issues and submit patches to update their dependencies later.
> Or you could have all the bugs introduced in everyone's hand-rolled implementations of it.
Only one buggy implementation per project. Compare this to including the same library in a dozen different versions, because dependencies have their own dependencies. And you can neither track the versions nor update them.
More importantly, if you only use a specific version of a library, you're opting out of literally every one of the advantages of micro-libraries that people claim they offer. Tying yourself to a single version is the same as copy-pasting the code right into your project, except it doesn't force you to look at the code and vet it.
And writing the code yourself instead of taking on a dependency solves none of these problems. Your code is incompatible with everything else, because you wrote it yourself. And you are responsible for making any security fixes yourself.
> You just get a different set of problems. When you require a specific version of a library, you're making your code incompatible with anything that requires a higher or lower version of that library. You're also assuming there will never be a security fix that requires you to update your dependency.
If there is a security fix, you bump your dependency by hand. The other problems you pointed out do not exist in Node (and it's about time they disappeared in Java).
I'll admit I'm at least a little tarnished in my practices due to time spent in enterprises where external dependencies require 6 manager sign-offs and a security team exemption, but if it's the case that you don't want updates to the package, just that one version that worked -
If it's just a few lines of code, just copy the thing into your code base. Throw a comment in saying "came from xxxx" so anyone reading your code knows that it might look like an overly generic function because it is.
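A hypothetical example of that pattern (the helper itself is made up, and the xxxx placeholder is kept from the comment above):

// came from xxxx -- vendored on purpose rather than added as a dependency.
// It looks like an overly generic function because it is.
function unique(arr) {
  return arr.filter(function (x, i) { return arr.indexOf(x) === i; });
}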
...and then the publisher pulls their library off npm, and another shows up and drops one of the same name in its place, with compatible version numbers (by happenstance or otherwise).
That's exactly the problem the parent comment suggests we focus on fixing. Once a library is published, npm shouldn't allow anyone to use that name even if the library is pulled.
True, but it's common to have requirements of the form "^1.0.0" (especially since this is the default of npm i --save). It's easy to publish a new version that would be installed by a project declaring a dependency in this form.
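Concretely, here's what a caret range accepts (checked with the semver package, which npm itself uses):

var semver = require('semver');
semver.satisfies('1.0.1', '^1.0.0'); // true  -- a freshly published 1.0.1 would be picked up
semver.satisfies('1.9.9', '^1.0.0'); // true
semver.satisfies('2.0.0', '^1.0.0'); // false -- new majors are excluded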
Maintaining a dependency on a library should be much less effort than maintaining 17 lines of code. If it isn't that's a deficiency in your dependency infrastructure.
If you have 100 dependencies, then that's 100 projects you need to follow and understand the updates for. The minute your dependencies bring in their own dependencies, you start having trouble keeping up with, or even keeping track of, the updates. The 17 lines of code you pull in are in most cases a one-time deal; having them in a third-party library means that you need to keep track of that library forever.
Honestly, and this is maybe just me being biased against JavaScript, but this is what happens when you pick a language fully knowing it's limited in all sorts of silly ways and attempt to use it as a general-purpose language. It's not that you can't, but if you need, say, typing that can tell you if something is an array, then maybe picking JavaScript to begin with wasn't the brightest idea.
There's a ton of libraries and hacks out there, all attempting to fix the fact that JavaScript isn't really a good general-purpose language. ES6 is fixing a lot of these things, but it's a little late in the game.
I wouldn't mind so much if these micro-modules were written in a style of thoroughness; heavily commented, heavily documented with pre-conditions, post-conditions and all imaginable inputs and outputs explicitly anticipated and formally reasoned about. I don't mind over-engineering when it comes to quality assurance.
Looking at that left-pad module though - no comments, abbreviated variable names, no documentation except a readme listing the minimal intended usage examples. This is not good enough, in my opinion, to upload to a public repository with the objective that other people will use it. It is indistinguishable from something one could throw up in a couple of minutes; I certainly have no reason to believe that the future evolution of this code will conform to any "expectation" or honour any "commitment" that I might have hopefully ascribed to it.
[EDIT: I've just noticed that there are a handful of tests as well. I wouldn't exactly call it "well tested", as said elsewhere in this thread, but it's still more than I gave it credit for. Hopefully my general point still stands.]
The benefits of reusing other people's code, to a code reuser, are supposed to be something like:
(a) It'll increase the quality of my program to reuse this code - the writer already hardened and polished this function to a greater extent than I would be bothered to do myself if I tried right now
(b) It'll save me time to reuse this code - with the support of appropriate documentation, I shouldn't need to read the code myself, yet still be able to use it correctly and safely.
Neither of those things is true for this module. It's not that the module is small, it's that it is bad.
(True that npm's mutability is a problem too - this is just a side-track.)
Completely agree here - the problem isn't micro-modules. It's partly just a lacking standard library for JavaScript, and largely just exposing issues in npm that the community was pretty ignorant of until just now.
The whole "amaga, it's a whole package for just ten lines of code" is just elitism. Given the number of downloads on things like left-pad, it's clearly useful code.
Agreed as well. In fact, I would posit that this wasn't even really a problem until npm@3 came out and made installing dependencies far, far slower. Yet it was necessary; a standard project using babel + webpack installs nearly 300MB (!!!) of dependencies under npm@2, and about 120MB under npm@3. Both are unacceptable, but at least npm3 helps.
1) JS is unique in that it is delivered over the wire, so there is a benefit in having micro-modules instead of a bigger "string helpers" module. Things like webpack are changing that now (you can require lodash, and use only lodash.padStart).
2) JS's "standard" library is so small because it's the intersection of all of the browser implementations of JS dating as far back as you care to support. As pointed out in a sibling comment, a padStart proposal (formerly padLeft) is making its way into the ECMAScript standard. But we'll still need left-pad for years after it's adopted.
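For the record, the per-method require mentioned in point 1 looks like this (lodash ships each helper as its own file):

var padStart = require('lodash/padStart');
padStart('5', 3, '0'); // => '005'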
Point 1 was addressed years ago by Google Closure Compiler, which used "dead code elimination".
Also, the Haxe language, which compiles to JS, has DCE.
Micro-modules is certainly a solution if you don't want to use pre-processing or compilers. So is copy/pasting, or manually writing your own util libs, which seems safer than relying on third parties to provide one-liners for you.
Eh, to date, a large part of the JS community still recommends including .js files in script tags in HTML. So, while this has been possible for a while, there hasn't been widespread adoption.
I'm not sure dead code elimination works in that situation. Consider:
var obj = {
  a: function () { /* ... */ },
  b: function () { /* ... */ },
  c: function () { /* ... */ }
};
If a, b, and c are functions, there is not necessarily a way to determine at compile time whether they will be used at runtime.
var prop = webrequest();  // property name only known at runtime
obj[prop]();              // dynamic access: any of a, b, or c might be called
In that scenario, a, b, and c cannot be eliminated. But it would be worth testing Google Closure Compiler to see what it does in what scenarios.
I've heard ES6 modules solve this problem, but it seems like dynamic access to an ES6 module might still be possible, which would cause the same problems for DCE. Perhaps no one writes code that way, so it doesn't necessarily matter. But what about eval?
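For what it's worth, this is the line ES6 modules draw (a sketch; './string-utils.js' is a hypothetical module): static imports are analyzable, dynamic property access is not.

// statically analyzable: a bundler can keep just this one binding
import { padStart } from './string-utils.js';

// dynamic access defeats the analysis: every export must be kept
import * as utils from './string-utils.js';
var key = window.prompt('which helper?');  // value only known at runtime
utils[key]();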
There are lots of tricky corner cases. It seems better to use small atoms than a monolithic package.
In simple mode, Closure Compiler would rename the local variable "prop" but not alter the properties.
In advanced mode, Closure Compiler would globally rename properties and statically eliminate dead code. In your example, it would remove a, b, and c and break the dynamic invocation.
This behavior is all outlined in Closure's documentation with examples.
I'm not really sure how this is an argument against DCE. If there's no way to tell at compile time whether these functions will be used, then you have to include them in the final delivered code, whether you are using monolithic packages, micro packages, or dead code elimination.
DCE will help you if you pull in huge-monolithic-package and only use one function from it. In that case it's basically the same as if you had used the micro package.
> 1) JS is unique in that it is delivered over the wire, so there is a benefit in having micro-modules instead of a bigger "string helpers" module. Things like webpack are changing that now (you can require lodash, and use only lodash.padStart).
JS isn't even remotely unique in this regard; almost every static language has had dead code removal for decades.
I agree with you, but they weren't saying that JS is unique for having dead-code elimination, but rather for having to be served over a network connection nearly every time it's needed (not accounting for caching and offline apps, etc), which presents an entirely new set of problems that webpack and others attempt to solve.
Which is why it's so unsuited for writing applications that don't run in the browser. JS was made for a purpose. We hacked it and discovered (or at least made popular) the benefit of having a fully async stdlib, of using callbacks to manage incoming connections, and so on.
The wise course of action would be to take those very good ideas and bake them in languages designed for web/system programming, and keep improving JS for in-browser tasks.
An in-browser web application requires the left-pad module. Should it include the 500kb library that happens to contain a left-pad function, or import just the 10kb version, which having left-pad as a module by itself allows?
Yes, you can use the left-pad module in a browser application using the npm install infrastructure.
No it doesn't. It comes with template strings. Any sprintf type functionality requires you to call functions on the input. It is a tiny tiny step forward.
Uh, no, left-padding is NOT built into JavaScript. The proposal to add `String.prototype.padStart()` (originally `padLeft`) is only just making its way into the ECMAScript standard.
JavaScript has a very minimal standard library; it's pretty asinine of you to compare it to C or any other language with a pretty extensive standard library.
C didn't get a standard (or a standard library) until 1989. It had been around for 17 years at that point. Two years after its invention, JavaScript was standardized in 1997. That's almost twenty years ago.
But alas, here we are, talking about the JavaScript language and its ecosystem.
It's easy to say "I don't see why there's a need for an 11 line module to pad left on a string" when your language of choice has a robust standard library with such abilities built in.
Wow, I feel like I could have written this. Back when I used Python, I had a folder full of functions I would copy-paste between my projects. (And maybe some of the projects contained unit tests for the functions. I didn't always keep those tests in sync.) Updating them was a pain because inevitably each one would get slightly modified over time in each project separately. Eventually, I collected all of them into a bundle of completely unrelated utility functions in a folder on my computer somewhere, and I would import the folder with an absolute path. Sharing the code I wrote was a pain because of how much it referenced files on my local computer outside of the project. I never considered publishing my utility module, because all of the stuff was completely unrelated. I'd rather publish nothing than a horrifying random amalgam; no single project of mine was even related to all of the subject matter present in it.
With npm and the popularity of small modules, it was obvious that I could just cheaply publish each of my utility functions as a separate module. Some of them are about a few dozen lines, but have hundreds of lines of tests and have had significant bugfixes that I am very happy I haven't had to manually port to dozens of projects. I don't miss copy-pasting code across projects, no matter how many people claim I've "forgotten how to program".
There is something about JavaScript that makes people go a little crazy both for and against it.
I've never seen so many programmers advocate copy/pasting code before...
But regardless of how many insults get thrown around, or how many people seem to think JS is useless or that it's a horrible language, it's probably my favorite (and I've done professional work in non-trivial applications in everything from C and C++, to Java and Go, to Python, Ruby, and PHP, to BusinessBasic and even some Lisp).
I'm going to keep writing stuff in JS, and I'm going to keep loving it. Regardless of how many people are telling me I'm wrong.
I'm very hesitant to answer this, as I know it will bring on angry comments and people telling me I'm wrong, but I'll give it a shot (this is all literally off the top of my head right now, so if you are going to poke holes in it, cut me some slack).
This got a lot bigger than I thought, so strap in!
* The lack of "private" anything. This sounds like a bad idea, but I firmly believe it was a major reason for JS's success. The ability to "monkey patch" anything, including built-in functions and other libraries, means that everything is extensible. It isn't something I do very often (mucking around with the internals of another module/system), but when I do it's really fun and generally solves a problem that otherwise would be unsolvable.
* The debugging. Oh, the debugging! It's magnitudes better than anything I've ever used before. And I don't just mean in features (I know that other langs have feature X that JS doesn't have, or can do Y better). I can use multiple debuggers, inspect EVERYTHING, breakpoints, live inline code editing, remote debugging (on pretty much every mobile device), world-class profiling tools with memory usage, CPU usage, JIT performance, optimizations/deoptimizations, etc... Hell, Edge is even getting "time travel debugging" where I can step BACKWARDS in code, edit it in place, then play it forward again! Also, sourcemaps! I can compile python/coffeescript/typescript/javascript to javascript and then minify it and combine multiple files, but when I open the debugger I see my source files, with their full filenames, and the execution is synced statement-by-statement. And they work for CSS too! And I almost forgot the best part: since they can be separate files, I can include them in my production build with literally 0 overhead. So if there are any problems with the live build, I can open the debugger and have the full development-mode first-class debugging experience, on the live site, even on a user's PC if I need to. Hell, I can even edit the code in place to see if my fix works! This is probably one of my favorite features of JavaScript and its ecosystem.
* Async programming. Yeah, I know other languages have it, but JS is the first time where I would consider it a "first-class citizen". Everything is async, it's amazing, and it's to the point that if something isn't async, it's almost a bug (see the snippet after this list). And this, combined with the event system and the single-threaded-ness, means writing performant code is more "straightforward" than I've experienced in other languages. Combine this with web workers (or threads in the node ecosystem) and you get the best of both worlds.
* The mix of functional and OOP programming. Functional programming sucks for some things, OOP sucks for others. I feel like in practice JS lets me use the best of both. Yeah, it's not "pure" or "proper", and yeah, you can use the worst of both, but I love it. You can add the mix of immutable vs mutable to this as well. By having both, it lets me choose which I want to work with for the current problem, even switching within a single project.
* It's fast. JS is pretty fucking fast in the grand scheme of things. Yeah, it's not C, but with typed arrays and some profiling (which JS makes oh so easy!) it's possible to wipe the floor with Python, Ruby, and PHP, and even give Java and Go a run for their money. For such a dynamic language, that's impressive.
* The compilation options. From coffeescript/typescript/flow, to just compiling between JS "dialects", adding non-standard (or extremely new) features to the language is "easy". It took me a little while to get used to being that disconnected from the final output, but once I "let go" of that, I found I loved it. With babel plugins I can add extra tooling, or extra type-checking, or even completely non-standard stuff like JSX, or automatic optimizations in the code that I output. Combined with some good tooling, I can even change how the code executes based on the "build" I'm generating (for example, I have some babel plugins that optimize React elements to improve first-load speed, but I only run them on staging/production builds, because the output is pretty verbose (the extra bulk gets removed when gzipped) and is difficult to debug).
* The tooling. Auto-refresh, hot module replacement, automated testing, integration testing, beautiful output in multiple formats, linting, minifying, compressing, optimizing, and more task runners than you'll ever need. The fact that I can write code, save, and have that code hot-replace the code currently running in my page on my PC, tablet, phone, laptop, and VM, all at the same time. There is nothing that even comes close to this. At all.
* And I guess finally, npm. The fact that there are 5+ different modules for everything I could ever want. The fact that I can choose to install a 3-line program, or write it myself, or install it first and write it myself later, or vice versa. The fact that I can choose a module optimized for speed, or one for size. The fact that I can get a pure-JS bcrypt and a natively-compiled bcrypt with the exact same API, and install the native one with a fallback to pure-JS if that fails. The fact that npm installs are so effortless that I have a project with about 100 direct dependencies (and most likely about 1000 when it's all said and done) and there isn't even a hint of a problem is wonderful (this is a bit of an edge case though; most of the packages installed here are "plugins" like babel plugins and postcss plugins, and I'm purposely avoiding bundled deps kind of for shits-n-giggles). And no matter how many internet commenters keep telling me I'm wrong, I haven't had any issues with it.
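On the async point above, a minimal sketch of the pervasive non-blocking style (nothing here blocks the single thread):

var fs = require('fs');
fs.readFile('/etc/hosts', 'utf8', function (err, data) {
  if (err) throw err;
  console.log('file length:', data.length);  // runs later, when the read completes
});
console.log('reading...');                   // logs first: the call above didn't block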
This got a lot bigger than I had intended, but the point is that while JS might not do any one thing very well, it does many things pretty damn well. And the benefits far outweigh the downsides for me.
I'm going to bed for the night, so don't expect an instant reply, but despite the "standoffish" nature of a lot of this, I want to hear responses.
Thank you for taking the time to write this out. This is probably the best description I have seen of why it is enjoyable to write JavaScript. I myself have been programming professionally for 15 years, writing C, C++, Scheme, Java, Rust, PHP, Python, Ruby, shell script, etc. I actually enjoy C and Rust, have a tremendous respect for Scheme, Clojure, Haskell, Scala, ML, etc., yet I always reach for Node and JavaScript because of the great JIT, instant startup time, great debugging tools, and ease of use of npm.
To add to the part about extensibility, many times I have jumped into a "node debug" session, in my own code or in 3rd party modules installed to node_modules. Many times I have added breakpoints or tweaked 3rd party code right in node_modules, or monkey-patched methods temporarily. This kind of thing is often nearly impossible, very time consuming, or just plain difficult to do in other languages.
Interestingly to me, a lot of your points apply to my own favourite language, Lisp.
Regarding a lack of private anything, it's possible to monkey-patch any Lisp package or class however one wants. And of course, one can get true privacy in JavaScript if one wants, by using closures — the same trick applies in Common Lisp.
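For anyone who hasn't seen it, the closure trick mentioned above looks like this in JavaScript (a minimal sketch):

function makeCounter() {
  var count = 0;  // unreachable from outside the closure: true privacy
  return function () { return ++count; };
}
var next = makeCounter();
next(); // 1
next(); // 2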
Lisp debugging is great: one can set up all sorts of condition handlers and restarts, and invoke them from the debugger.
Lisp is a great blend of imperative, functional & object-oriented programming styles, enabling you to use the right tool for the problem at hand.
Lisp is incredibly fast, faster than C & C++ in a few cases and almost always Fast Enough™. It's dynamic, but inner loops can be made very static for performance. There's even a standard way to trade off safety and performance, if that's what's important in a particular case.
I don't know for certain, but I believe that Lisp was the language that invented hot-patching (well, I suppose one could always have done it from assembler …). It was even used to debug a problem on a NASA probe[0]: 'Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.' As with JavaScript, the Lisp debugger is part of the standard and is always available. This can be profoundly useful.
And Quicklisp is a nice, modern way to pull down numerous libraries.
Lisp does of course support async programming since functions are first-class, although I'm not personally as much of a fan as you are. Generally, I think that callback hell should be avoided.
I'm not aware of a lot of compile-to-Lisp projects, but given that the language is great at treating its code as data, it's an excellent target.
It certainly doesn't have the huge ecosystem that JavaScript does, but that improves with every developer who starts a project.
I really, really wish more folks would take a look at it. The more I use it, the more I realise that a 22-year-old standard has well-thought-out solutions to problems that people still face in other language environments today.
The only thing I have to add is that while callback hell sucks, there have been some pretty recent (in the grand scheme of things) additions to the async programming field. Async/await is beautiful, and it has made me fall in love with async programming all over again.
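A minimal sketch of the style being praised (sleep() is a stand-in helper, not from the comment above):

function sleep(ms) {
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
}

async function demo() {
  console.log('waiting...');
  await sleep(1000);  // reads like blocking code, but the thread stays free
  console.log('done');
}
demo();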
I meant more that I have multiple options to choose from.
Multiple browsers means there are multiple competing sets of debugging tools; each is good at some things and worse at others. For example, Firefox was among the first to be able to properly debug promises, while Chrome still let them swallow unhandled errors.
After writing that, I think the reason I love working with JS is the choice. A lot of that choice isn't necessarily because of the language (you could easily have that debugging experience in other languages), but it's currently in JavaScript.
Not sure I agree with some of your points, but you most likely have used JavaScript more than I have, since most of my professional experience is with enterprise Java. Let me try to show you how I see the programming world through my Java-colored glasses :P
"The lack of 'private' anything" - Just hearing this caused immediate revulsion and I am sorry to say that. Where I come from, the best practice is to try to keep everything closed from modification but still open enough for extension: http://www.cs.utexas.edu/users/downing/papers/OCP.pdf
"The debugging" - The features you mentioned are all available in the Java ecosystem as well. It is a very mature ecosystem with great IDEs, debuggers, performance testers, etc. The step-back and edit feature of debugging has been around for awhile now. Heck, you can even write your own debugger fairly easily due to good support of other tools in the ecosystem.
"async programming" - Not sure what you mean by "first-class citizen", but asynchronous programming can also be done with Java as well. Callbacks and futures are used widely (at least where I work). But even better: Java is multi-threaded. What happens to the Node.js server if a thread is getting bogged down?
"the mix of functional and OOP" - I admit I have no experience with functional programming so I can't say anything about mixing the two paradigms together. But I have seen OOP with Javascript and frankly, it is confusing and unwieldy. I don't even think the concept of class as an object blueprint exists. How do you even create a class that inherits the properties of another Javascript class? It is one of the basic ideas of OOP but I don't think Javascript supports it. From my brief time with it, it really looks like you only have simple objects with properties and methods which can be set up off of a prototype but that's it.
"the compilation options" - I'm assuming you're are talking about transpilers and yes, I've been noticing more transpilers that target Javascript. I honestly don't know why one would want to do that though. It just seems an unnecessary layer. Why not just directly write Javascript code? Is the Javascript syntax so bad that you want to code in pseudo-Ruby (Coffeescript)? :)
"the tooling" - Hot swapping, test frameworks, linting, optimizing, ...these are also available in the Java ecosystem and have been for quite some time now. Notice I didn't mention auto-refresh, minifying, and compressing since I am not sure what exactly those are and I don't think they apply to compiled languages.
"npm" - The available libraries in the Java ecosystem is vast and a great number of them have been developed and iterated upon by some of the best engineers and computer scientists in the past ~20 years. And the Java libraries do not seem to have the problems that npm is suffering at the moment :P
Per privacy, I had the same reaction at first. But at one point I had a bit of a realization: I'm protecting my code from being "used" incorrectly. Me writing extremely private code and only allowing what I want to be used externally is not going to make my code any more "secure"; it's going to make sure other people don't misuse it. With that in mind, I've found that documentation and comments provide the exact same assurances, while still allowing someone to go and poke around in your internal code if they need to (and they are willing to take on the possibility that the code will change out from under them). It's almost always a "last resort" thing, but without this the web wouldn't have grown at the pace it did. This is what allowed polyfills, this is what allows me to patch old software versions, and what allows me to modify a library in 3 lines to work with another library (literally just last week I found that a storage library had a nearly identical API to another library, except one function had a different name. Because of "nothing private", I was able to hang a new function on the library's object at runtime and redirect it to the original function name, meaning the storage lib was now compatible with about 3 added lines. It's a pretty weak example of this, but it's the most recent one I can think of).
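A hypothetical reconstruction of that shim (all names are made up; the point is only that nothing stops you from hanging a new method on someone else's object at runtime):

var storage = require('some-storage-lib');  // hypothetical package
storage.getItem = function (key) {
  return storage.get(key);  // alias the existing method under the name the other lib expects
};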
Per the debugging, I might have to take another look at this. I hadn't realized that it was that nice!
Per async, yeah, Java can do async programming, but in JS you MUST do async programming. Because of the single-threaded nature, if you aren't async you are blocking, which ruins performance instantly. This means that every library, function, and module is built for async from the start. "What happens to the Node.js server if a thread is getting bogged down?" It runs slowly, or in some cases not at all. Yeah, that sucks, but this constraint forced better and more widely available async code. Plus, if you really need multiple threads you can have them (web workers on the web, threads on node), but you need to control them absolutely. It's more work, but it's the same outcome. I'd prefer this to be different, but trying to bring multi-threaded execution to JavaScript is like trying to stop a tornado with your bare hands...
Per functional/OOP: JS's OOP is lacking (or was; with ES6 it's recently gotten MUCH better). Now you can inherit from another class; now you can use a class as an object blueprint. There's still work to be done here, but it's getting better. That being said, there are ways to solve those same problems, but they are functional. And even though we are getting inheritance, it's largely an anti-pattern (at least to me). Composition is almost always preferred. Going back to my monkey-patched library from above, I was able to modify the object at runtime to add more functions and to "redirect" others. In something like Java I'd need to wrap that class in another class and add what I want there. It's largely the same result when used alone, but when you combine that with a polyfill that's fixing something in all objects, and with a plugin for a library that wasn't meant to have plugins in the first place, in Java land you quickly end up with a big mess, while in JS land you can keep hanging more and more crap on an object at runtime if you want. And because the language was "meant" to be used this way, everyone expects and guards against it.
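For the inheritance question upthread, the ES6 syntax looks like this:

class Animal {
  constructor(name) { this.name = name; }
  speak() { return this.name + ' makes a sound.'; }
}

class Dog extends Animal {
  speak() { return this.name + ' barks.'; }  // override the parent method
}

new Dog('Rex').speak(); // 'Rex barks.'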
Per speed: it's not as fast as Java/Go 90% of the time, but there are times where it can be. Yeah, those are the minority, but I was more just trying to dispel the myth that JS is dog slow. It's impressively fast these days. My favorite set of benchmarks to show off is the asm.js suite from "arewefastyet"[1]. It's more for comparing JS engines against each other, but there is a "Native C++" compiled version of the same code, and they compare their JS against that.
The compilation options: it seems like an unnecessary layer, but in practice it's not as bad as many make it out to be. You still need to know JS to do it, so you are doubling the number of languages you need to know to work on the project, but it does allow for some cool stuff. Typescript (Microsoft's strong-er typed JavaScript/C# hybrid compile-to-JS language) actually pushed a bunch of features into the newest version of JavaScript. Coffeescript did as well. These compile-to-JS langs are partly a symptom of a problem, and by their nature they let people "solve" the problem now, and when it gets fixed they can migrate back to "pure" JS if they want. Also, it's this "transpiling" that lets things like React's JSX exist, adding entirely new, unrelated-to-JavaScript-itself parts to the language. JSX allows you to embed HTML directly into JS, and it's basically a wrapper around React's createElement function (in reality it's MUCH more, but that's the gist of it). It's really strange if you haven't used it before, but it's extremely powerful. And it could be done in other languages (and it is; look at Go's generate command), but it's already here in JS, and I love it!
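Roughly, the JSX desugaring looks like this (a sketch; assumes React is in scope):

var el = <div className="greeting">Hello</div>;
// ...compiles to approximately:
var el2 = React.createElement('div', { className: 'greeting' }, 'Hello');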
The tooling: Java is probably the only other one in my "list of languages" that is on the same level as JS in terms of tooling. The problem I have with Java's tooling is that it tends to be built into IDEs instead of standalone tools. So that means I'm married to my IDE if I want those features. In JS land, for the most part, they are standalone and can be used with many different editors. This is a pain point at my work currently, as we are switching from a Business Basic stack that marries you to the IDE and everyone wants different things in their next editor. It's a small problem in the grand scheme of things, but it's a bit of a pet peeve of mine.
And npm: Maven is great, but it just doesn't have the same number of options that something like npm has. I know this isn't the language's fault (the best package manager doesn't mean shit if there are no packages), but it's a pain point. Many people seem to think of having "too many options" as a problem, but I think I'm spoiled by it now. If I want a lib to handle authentication in my app, I have 5 or more major choices. They all have their upsides and downsides; they are all made for different use cases. In the Java world I just haven't seen that amount of choice. The only other one that comes close is, surprisingly, Go. I really think the number of packages stems from ease of publishing, and npm (and Go) have that down pat. I also think this comes down to the languages being oriented towards different things.
I appreciate the comment, and I'm in no way trying to say that JS is the only one that has these things (not that it sounded like you were implying it), but when combined, it makes for a nice experience.
> But regardless of how many insults get thrown around, or how many people seem to think JS is useless or that it's a horrible language, it's probably my favorite (and I've done professional work in non-trivial applications in everything from C and C++, to Java and Go, to Python, Ruby, and PHP, to BusinessBasic and even some Lisp).
I see one common thread between all those languages you list: none of them has a decent type system.
If you ever get the chance I'd strongly recommend trying a small project in a strongly-typed functional language - PureScript if you're targeting the browser, otherwise Haskell or Scala, or maybe F# or OCaml. (Scala and OCaml also have compile-to-JS options). If you find you don't like that style then fair enough, but it's well worth trying one language in that space and getting a sense of the techniques that it enables - it will make you a better programmer even if you end up going back to JavaScript or another language from your list.
I've actually played with OCaml a bit, and Haskell a bit less. The problem is that I don't know what "problems to solve" with them, and there is no way I'm going to use something like that at work, so I kind of run out of steam before I really get into it.
I might shoot for Scala next time. We don't use Java anywhere at my current job, but I might play around with it in a personal project for a while.
I really like the functional style, and I can see how strong typing works REALLY well with it, but I've already found that it's pretty hard to bring other devs up to speed on it. And that really limits where I use it.
If you like JavaScript and you want to try a language with a good static type system, you might like Elm (http://elm-lang.org/). As a bonus, it has fantastic documentation and examples of small in-browser projects -- a clock, Pong, and so on.
My dependency strategy has moved over time towards more static, project-owns-everything behavior, and specifically "one synchronization script per dependency" - including for my own utilities library. The script leaves an echo of the timestamp so that I can also see when each dependency was last updated.
That way, different projects can have different versions of everything, and system environment is only important to the synchronization step - trivial to fix up if needed, trivial to copy between machines.
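A hypothetical sketch of one such sync script (Node flavor; the URL and paths are placeholders, not from the comment above):

var execSync = require('child_process').execSync;
var fs = require('fs');

// fetch or update the project's vendored copy of the dependency
execSync('git -C vendor/left-pad pull || git clone https://example.com/left-pad.git vendor/left-pad');

// leave an echo of the timestamp so it's obvious when this was last synced
fs.writeFileSync('vendor/left-pad/.synced-at', new Date().toISOString() + '\n');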
What I see is that a module has a non-zero complexity overhead in itself. That is, ten 10-line modules and twenty 5-line modules do not yield the same complexity. The modules themselves have a complexity overhead associated with them, and submodules have their own complexity overhead, albeit smaller than that of first-party modules. That complexity is easily seen in the recent situation of unpublished modules, which resulted in modules multiple steps removed having problems building.
So, when I read "It doesn't matter if the module is one line or hundreds," I call bullshit. There is overhead; it's usually just fairly small (though it may even begin to rival the gains from using a module at that level), but that small amount adds up. Once you've had to deal with a dependency graph that's 10 levels deep and contains hundreds or thousands of modules, the small extra complexity imposed by each module is no longer small in total, and comes at a real cost, as we've just seen.
Other module ecosystems have gone through some of the same problems. There was a movement in Perl/CPAN a few years back to supply smaller, more tightly focused modules, to combat the sprawling dependencies that were popping up. The module names were generally suffixed with "Tiny"[1], and the goals were multiple:
- Where possible, clean up APIs where consensus had generally been built over what the most convenient usage idioms were.
- Try to eliminate or reduce non-core dependencies where possible.
- Try to keep the modules themselves and their scope fairly small.
- Remove features in comparison to the "everything included" competitor modules.
This has yielded quite a few very useful and strong modules that are commonly included in any project. They aren't always tiny, but they attack their problem space efficiently and concisely. Even so, I'm not sure there's ever a module that's a single line of code (or less than 10, given the required statements to namespace, etc.), as the point is to solve a problem, not perform a single action.
It doesn't handle edge cases, it doesn't perform well and it isn't well tested. There is also no documentation. Obviously 30 seconds wasn't enough for you to verify anything at all about this module (namely that it's complete garbage).
And just because some random guy didn't get something as trivial as this right the first time, doesn't mean nobody else can. Also the de facto standard library lodash already has padding utilities, made by people who have a proven track record.
I don't agree with the explosion of micro-modules. There's a reason the vast majority of languages don't have them, at least not at function level.
IMO in the JavaScript world they're only there to minimize script size for front-end work. See lodash & lodash-something1 / lodash-something2 / ..., where there's the option of using the whole module or just including 1-function-long scripts, precisely to avoid the script size issue.
Is there a solution for this? I know that the Google Closure compiler can remove dead code, ergo making inclusion of large modules less costly in terms of code size. Am I missing some ES6 feature that also helps with this?
You're just trading in the complexity of the code you'd have to write for the delayed complexity of dealing with dependency issues down the line. It's a waste of a trade off for tiny things like this.
I agree with the points you've made, but I would also posit that adding a dependency using this mutable package manager is making a commitment to maintain the integrity of that dependency, which is arguably more work than maintaining the handful of lines of code.