> It explains how natural language is a very poor way to express programs, and how it held back science and progress for many centuries.
I'm not interested in using natural language to implement the software (write code). I'm interested in using natural/technical language to create an ontology for the architecture. This is where things get gray.
When I think of an architecture, I think of something that evolves over time. I think of architecture as a tool to facilitate communication between programmers, designers, project management, domain experts, & users.
This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.
I see this fuzziness as an accurate model of the conceptual domain, which is ultimately based on the understanding of multiple humans. This understanding is fuzzy and heavily dependent on context. And yet, the ontology attempts to corral this fuzziness into more strongly defined concepts, which map to the implementation. The implementation should not be fuzzy at all.
> Other invariants are similar. Instead of documenting them or keeping them in your head, you let your compiler worry about them.
I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect. When there is such proof, I correct the system. In environments that support rapid feedback & deployment, like the web, this works well. In environments that don't support rapid deployment & iteration, or where lives are at stake, not so well.
> But Go is extremely inflexible. It has a very primitive set of tools to do everything, clumsily. A language like Haskell, for example, lets you have Go-like coroutines and channels. But it also lets you program with software transactional memory, or use other parallelism constructs.
That sounds good to me.
> Also, the inability to specify invariants of your program isn't flexibility.
That mostly sounds good. I would want invariants to be optional, which it sounds like is the case.
> I usually use guard clauses to protect against nulls.
An interesting (to me) insight was that in a language with a flexible type system, the types are effectively just a set of assertions at the start and return of every function, saying that the inputs and outputs have certain properties. With a compact syntax and zero runtime overhead, which is nice.
I do think that some strongly typed languages make it too difficult to step outside the type system; in Scala I have to do something like (x.asInstanceOf[{def foo(): String}]).foo() whereas in Python I can just write x.foo(). But once I started seeing the type system not as a fixed piece of the language but as a framework for encoding my own assertions, it became useful enough that I can't stand to live without it.
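For concreteness, a minimal Haskell sketch of that framing (firstItem is a made-up name for illustration): the signature is the assertion, checked at compile time, with no runtime cost.

    import Data.List.NonEmpty (NonEmpty(..))
    import qualified Data.List.NonEmpty as NE

    -- The signature asserts the input is non-empty, so the body needs no
    -- runtime check and can never crash on an empty list.
    firstItem :: NonEmpty a -> a
    firstItem = NE.head

    main :: IO ()
    main = print (firstItem (1 :| [2, 3]))  -- prints 1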
It's much more than just at the start and return of every function. But OTOH, it's much less than assertions, since they're restricted to a subset that can be proven (usually automatically).
Note the Scala verbosity here is Scala-specific. In HM-style (Hindley-Milner) languages, type inference works much better and you don't have to do such things. You might still have to explicitly "lift" a value from one type to another (e.g., wrap a value in "Just", or use "lift"), but that's a much more minor price to pay.
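A small sketch of both points, assuming GHC Haskell (double and lifted are made-up names):

    -- No annotation needed: inference gives `double` a polymorphic
    -- numeric type on its own.
    double x = x * 2

    -- The "lifting" price: wrap a plain value in Just, then apply a plain
    -- function inside Maybe with fmap.
    lifted :: Maybe Int
    lifted = fmap double (Just 21)

    main :: IO ()
    main = print lifted  -- prints Just 42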
> I do think that some strongly typed languages make it too difficult to step outside the type system; in Scala I have to do something like (x.asInstanceOf[{def foo(): String}]).foo() whereas in Python I can just write x.foo()
Have to make an explicit cast with a structural type? Surely you can do better, like, say, a trait.
> I'm talking about casting, calling a method that the type system doesn't know is present.
I think a larger example is necessary to see how you ended up in such a situation, but I suppose it's off-topic...
> This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.
So what does this have to do with Go or type systems?
> I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect.
And that's a bad thing. There are better tools for this job. What's the downside of encoding nullability into the types?
> That mostly sounds good. I would want invariants to be optional, which it sounds like is the case.
Every good type system lets you "opt out".
Therefore, it is a bit silly to look at dynamic typing, where you cannot "opt in", as more flexible.
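In GHC Haskell, for instance, Data.Dynamic (in base) is one such escape hatch. A minimal sketch:

    import Data.Dynamic

    -- A heterogeneous list: these values have opted out of static typing.
    stuff :: [Dynamic]
    stuff = [toDyn (42 :: Int), toDyn "hello", toDyn True]

    main :: IO ()
    main = do
      -- Opting back in: fromDynamic recovers a typed view, or gives
      -- Nothing if the runtime type doesn't match.
      print (fromDynamic (head stuff) :: Maybe Int)     -- Just 42
      print (fromDynamic (head stuff) :: Maybe String)  -- Nothing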
> This ontology evolves over time as the software, understanding of the domain, & the domain itself changes.
> So what does this have to do with Go or type systems?
I've found type systems that don't utilize duck typing to be restrictive & a source of incidental complexity when evolving the design. I don't really care if something is a categorization of something else. I usually (> 98% of the time) only care if that something adheres to an interface.
I don't like to label people in life either :-)
> I usually use guard clauses to protect against nulls. I rely on my tests & production monitoring systems to prove that the implementation is incorrect.
> And that's a bad thing. There are better tools for this job. What's the downside of encoding nullability into the types?
For the web, it's not that bad. The design is constantly evolving, so most of the development period is spent on work that is still in flux anyway.
There's no downside in encoding nullability, unless extra syntax & incidental complexity is added. It's not a big problem for me so I'd rather not have to do extra work for this feature.
> That mostly sounds good. I would want invariants to be optional, which it sounds like is the case.
> Every good type system lets you "opt out".
Explicitly or implicitly? I'd rather be opted out by default and opt in when I want to. Again, I don't want to do extra work or have incidental complexity.
> I usually (> 98% of the time) only care if that something adheres to an interface.
Well, whether something is "nil" or not definitely matters to whether it adheres to the interface, doesn't it?
> There's no downside in encoding nullability, unless extra syntax & incidental complexity is added
You just need sum types and pattern-matching, which are a straightforward addition to the language -- and very fundamental to computation, so not quite "incidental complexity".
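A minimal sketch in Haskell, where Maybe is that sum type (shout is a made-up example): the pattern match plays the role of the guard clause, except the compiler won't let you forget it.

    import Data.Char (toUpper)

    -- Nullability lives in the type: a plain String can never be "null",
    -- and a Maybe String forces both cases to be handled.
    shout :: Maybe String -> String
    shout Nothing  = ""              -- the "guard clause", enforced at compile time
    shout (Just s) = map toUpper s

    main :: IO ()
    main = do
      putStrLn (shout (Just "hi"))  -- prints HI
      putStrLn (shout Nothing)      -- prints an empty line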
> I'd rather be opted out by default and opt in when I want to
Then why do you use Go and not a fully dynamically typed language? In Go you opt out explicitly with "interface {}".
Why would you rather opt in? Opt out makes sense because types are so cheap 99% of the time.
> I usually (> 98% of the time) only care if that something adheres to an interface.
> Well, whether something is "nil" or not definitely matters to whether it adheres to the interface, doesn't it?
True. Though we are discussing nil/null as being a potential state of data. I actually like & utilize JavaScript's notion of falsy (false, "", undefined, null, 0, NaN). It's not precise, but most of the time, precision is not needed. Just the general notion that there is a value to operate on or not. Optimizing toward brevity supersedes precision in many cases.
> I'd rather be opted out by default and opt in when I want to
> Then why do you use Go and not a fully dynamically typed language? In Go you opt out explicitly with "interface {}".
I mostly do use dynamic languages. Though, the notion of the Go interface makes the API explicit, yet remains decoupled from the rest of the type system, which seems ok. I've seen proponents use these interfaces to later "discover" types.
> Why would you rather opt in? Opt out makes sense because types are so cheap 99% of the time.
I would rather opt in if I would otherwise have to think about it every time the situation comes up.
Take a collection as an example. Most of the time, I really just want to put a bunch of objects (data) into the collection. I don't want to be a bookkeeper of what type of data is going into the collection. I trust that the data "works" with the rest of the program and will utilize other mechanisms to prove that it doesn't work. I don't want to have to fight compile errors and have to craft a type system just to put an object into a collection.
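For reference, in an inferred language that bookkeeping falls on the compiler rather than the programmer. A small Haskell sketch with made-up data:

    -- No annotations anywhere: the element type of the collection is
    -- inferred from the literals and from how the list is used.
    names = ["ada", "grace", "barbara"]

    longNames = filter (\n -> length n > 4) names

    main :: IO ()
    main = mapM_ putStrLn longNames  -- prints "grace" and "barbara"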
---
As a general notion, I like to evolve the design from a simple understanding to a more precise & intricate understanding. My ideal programming language would be forgiving of my initial simplistic domain understanding and facilitate the growth of precision over time.