
> when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.

This hits so hard. cough dynamic typing enthusiasts and vibe coders cough



> Another thing we can learn from the past is the failure of characterizations like "Computing Science is really nothing but X", where for X you may substitute your favourite discipline, such as numerical analysis, electrical engineering, automata theory, queuing theory, lambda calculus, discrete mathematics or proof theory. I mention this because of the current trend to equate computing science with constructive type theory or with category theory.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD12xx/EWD124...


I’m not sure that is the focus of most serious dynamic languages. For me, it’s the polymorphism and code reuse they enable that the popular static languages generally haven’t caught up to.


I’m curious, can you give an example that wouldn’t be solved by polymorphism in a modern statically typed OO language? I would generally expect that in most cases the introduction of an interface solves this.

Most examples I can think of would be things like “this method M expects type X” but I can throw in type Y that happens to implement the same properties/fields/methods/whatever that M will use. And this is a really convenient thing for dynamic languages. A static language proponent would call this an obvious bug waiting to happen in production when the version of M gets updated and breaks the unspecified contract Y was trying to implement, though.
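
Roughly the shape I have in mind, as a Python sketch (all names here are placeholders, not from any real library):

    # The duck-typing scenario: M was written against X, but Y happens to
    # provide the same method M actually uses.
    class X:
        def describe(self) -> str:
            return "the type M was written for"

    class Y:
        # Never declared to be related to X.
        def describe(self) -> str:
            return "a look-alike that also works"

    def m(obj):
        # M only calls .describe(), so both X and Y work today...
        return obj.describe().upper()

    print(m(X()))
    print(m(Y()))  # ...until a new version of M calls something Y lacks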


That’s basically the main example I’d give. I think the static proponents with that opinion are a little myopic. Those sorts of relationships could generally be statically checked, it’s just that most languages don’t allow for it because it doesn’t fit in the OOP/inheritance paradigm. C++ concepts seem to already do this.
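
Python’s typing.Protocol is another mechanism along these lines; a minimal sketch of the idea (the static check assumes a type checker such as mypy):

    from typing import Protocol

    class Describable(Protocol):
        def describe(self) -> str: ...

    class Y:
        # No inheritance from Describable; the match is purely structural.
        def describe(self) -> str:
            return "still works"

    def m(obj: Describable) -> str:
        return obj.describe().upper()

    m(Y())  # accepted by the type checker; an object without
            # .describe() would be flagged before runtime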

The “bug waiting to happen” attitude kind of sucks, too. It’s a good thing if your code can be used in ways you don’t originally expect. This sort of mindset is the same trap that inheritance proponents fall into. If you try to guess every way your code will ever be used, you will waste a ton of time making interfaces that are never used and inevitably miss interfaces people actually want to use.


> The “bug waiting to happen” attitude kind of sucks, too. It’s a good thing if your code can be used in ways you don’t originally expect.

Rather than call it myopic I would say this is a hard won insight. Dynamic binding tends to be a bug farm. I get enough of this with server to server calls and weakly specified JSON contracts. I don’t need to turn stable libraries into time bombs by passing in types that look like what they might expect but aren’t really.

> If you try to guess every way your code will ever be used

It’s not about guessing every way your code could be used. It’s about being explicit about what your code expects.

If I’m stuffing some type into a library that expects a different type, I don’t really know what the library requires and the library certainly doesn’t know what my type actually supports. There’s a lot of finger-crossing and hoping it works, and that it continues to work when my code or the library code changes.


> Rather than call it myopic I would say this is a hard won insight. Dynamic binding tends to be a bug farm.

I run into typing issues rarely. Almost always the typing issues are the most trivial to fix, too. Virtually all of my debugging time is spent on logic errors.

> It’s not about guessing every way your code could be used. It’s about being explicit about what your code expects.

This is not my experience in large OOP code bases. It’s common for devs to add many unused or nearly unused (<3 uses) interfaces while minor features still require substantial refactors.

I think what’s missed in these discussions is the massive silent incompatibility between libraries in static languages. It’s especially evident in numerical libraries where there are highly siloed ecosystems. It’s not an accident that Python is so popular for this. All of the interfaces are written in Python. Even the underlying C code isn’t in C because of static safety. I don’t think any of that is an accident. If the community started over today, I’m guessing it would instead rely on JITs with type inference. I think designing interfaces in decentralized open source software development is hard. It’s even harder when some library effectively must solely own an interface, and the static typing requires tons of adapters/glue for interop.


> Almost always the typing issues are the most trivial to fix, too.

For sure. My issue is with the ones I find in production. Trivial to fix doesn’t change the fact that it shipped to customers. The chances of this increase as the product size grows.

> It’s common for devs to add many unused or nearly unused (<3 uses) interfaces while also requiring substantial refactors to add minor features.

I’ve seen some of this, too. The InterfaceNoOneUses thing is lame. I think this is an educational problem and a sign of a junior dev who doesn’t understand why and when interfaces are useful.

I will say that some modern practices like dependency injection do increase this. You end up with WidgetMaker and IWidgetMaker and WidgetMakerMock so that you can inject the fake thing into WidgetConsumer for testing. This can be annoying. I generally consider it a good trade off because of the testing it enables (along with actually injecting different implementations in different contexts).
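
A minimal Python sketch of that shape (class names taken from above; the bodies are invented for illustration):

    from abc import ABC, abstractmethod

    class IWidgetMaker(ABC):
        @abstractmethod
        def make_widget(self) -> str: ...

    class WidgetMaker(IWidgetMaker):
        def make_widget(self) -> str:
            return "real widget"      # production implementation

    class WidgetMakerMock(IWidgetMaker):
        def make_widget(self) -> str:
            return "fake widget"      # test double

    class WidgetConsumer:
        def __init__(self, maker: IWidgetMaker):
            self.maker = maker        # dependency injected via the constructor

        def run(self) -> str:
            return self.maker.make_widget()

    assert WidgetConsumer(WidgetMakerMock()).run() == "fake widget"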

> I think what’s missed in these discussions is massive silent incompatibility between libraries in static languages.

What do you mean by this?

> It’s especially evident in numerical libraries where there are highly siloed ecosystems. It’s not an accident that Python is so popular for this. All of the interfaces are written in Python.

Are we talking about NumPy here and libraries like CuPy being drop-in replacements? This is no different in the statically typed world: if you intentionally make your library a drop-in replacement, it can be; if you don’t, it won’t be.
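
For concreteness, a sketch of the drop-in pattern (it assumes CuPy’s deliberately NumPy-compatible API; only NumPy is needed to run it):

    import numpy as np

    def normalize(x):
        # Uses only methods both NumPy and CuPy arrays provide.
        return (x - x.mean()) / x.std()

    print(normalize(np.arange(10.0)))

    # If CuPy is installed, the same function runs on GPU arrays unchanged:
    #   import cupy as cp
    #   normalize(cp.arange(10.0))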

I am not personally involved in any numeric computing, so my opinions are mostly conjecture, but I assume that a key reason Python is popular here is that a ton of numeric code is not needed long term. Long-term support doesn’t matter much if 99% of your work is short term in nature.


It is just a classical Dijkstra strawman: hiding a weak argument behind painting everybody else as idiots. In fact, it is much easier to make dangerous undetected mistakes in C than it is in Python.


I downvoted you. First of all, your explanation of what a "classical Dijkstra strawman" is lacks substantiation. Second, your statement about C vs Python is a sort of strawman itself in the context of static vs dynamic typing. You should compare Python with things like Java, Rust, or Haskell. (Or C with dynamic languages from a similar era: Lisp, Rexx, Forth, etc.)


It is a strawman because nobody actually equates the ease of programming with the ease of making undetected mistakes.


Presumably no one thinks "I love using [dynamically-typed language] because I can make mistakes more easily", but on the other hand, isn't it the case that large codebases in these languages are written with low initial friction but high future maintenance costs?


So you agree it is a strawman?


Perhaps Dijkstra was going for the former, but is it bad to consider a stronger argument along the lines of what he said?


A charitable interpretation would be that he criticizes, e.g., JavaScript's silent type coercion, which can hide silly mistakes, compared to, e.g., Python, which will generally throw an error in the case of incompatible types.
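
For instance, a quick Python illustration (the JavaScript behaviour is only noted in the comment):

    # Python refuses the mixed-type operation at runtime,
    # whereas JavaScript silently coerces: "1" + 1 evaluates to "11".
    try:
        "1" + 1
    except TypeError as e:
        print(e)  # can only concatenate str (not "int") to str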



