SabrinaJewson's comments | Hacker News

You know that “learning disability” isn’t a synonym for “stupid”, right? We neither call people who are less academically able “disabled”, nor are disabled people necessarily less academically able (apart from some more debilitating mental disorders, which would be a disability). In fact, it’s quite the opposite: the word “disability” exists _precisely_ to distinguish “intelligence” – which is what the university is selecting for – from other characteristics, so in theory intelligence and disability are entirely orthogonal (apart from the exception I mentioned).

Of course, understanding what disability actually is requires considering each learning disability separately, which is something this article unfortunately fails to do. We can do this though:

- Anxiety and depression: I see no reason why these should decrease somebody’s intelligence, so the fact that there are elevated rates of such people at top universities does not seem odd. Since these are treatable conditions, they won’t necessarily affect a student’s ability to become an effective researcher.

- ADHD: This condition is marked by a lack of ability to focus, which is a property unrelated to intelligence. Some very famous mathematicians like Paul Erdős likely had ADHD, demonstrating that it’s not necessarily true this condition makes one a worse researcher.

- Autism: Does not necessarily reduce intelligence. We can look at professional mathematicians and see that a lot of them are autistic.

- Chronic pain, migraines, etc.: Unrelated to intelligence. It’s possible this will decrease one’s ability to be a researcher, but if one is able to complete university at all, it’s likely not that severe.

I mean, I could go on, and of course there will be a couple of counterexamples. However, it is still the case that, generally speaking, “learning disability” and “stupid” are different things, and therefore there is no reason to expect that there would be lower rates of learning disabilities among those who are highly academically skilled.


I have been under the impression that "learning disability" means that you are less able to learn than your peers. Whether that deficit is on account of intelligence, health, etc., is a different subject.

According to your definition, you can be far superior to your peers at learning and still be learning disabled. If you are looking for stupid people, you have found one, because I don't understand that.

Because of all of the ways that students can be disadvantaged at learning, every student needs accommodations. There are no students who can't benefit from a highly responsive learning environment. Being able to benefit from that does not make any student learning disabled, just different, and they are all different.

But if you're just different, and not disabled, you lose victim cred, preferences and funding.


You can claim that “learning disability” should mean whatever you like, but this does not change the fact that medical experts define “learning disability” such that it does not inherently impede intelligence: https://ehvi.org/learning-vs-intellectual-disabilities/. This isn’t my definition, it’s the definition used by medical experts. A quote from that article:

> Learning disabilities don’t affect intelligence and are different from intellectual disabilities. People with LDs have specific issues with learning but have an average or above-average IQ (intelligence quotient).

I acknowledge that I was including autism as a learning disability, but I see this isn’t the case. Still, however, I hope you would acknowledge that autistic people are not inherently less intelligent than others, and neither are people with depression nor anxiety.


I don’t know why you’re so angry at this statement, because it’s factually true. Do you truly believe that the proportion of families who stigmatize mental health care is negligible?


100%. School and the Internet are the two places where children can encounter opinions different from their parents’ for the first time. With an increase in homeschooling and recent pushes to ban social media for children, it’s clear that critical thinking is going to suffer most. I still have not met someone who was homeschooled who was remotely thankful for it.

Honestly, support for these policies that benefit, more than anyone else, abusive parents, makes me suspicious of people’s motives.


Hello! My parents were early homeschooling pioneers, and I never had any conventional schooling. I think it suited me very well, and I'm glad my parents did it. I am solidly middle-aged now, but nothing I've seen or heard in all these years has given me the impression that I missed out on any experience worth having.


In type theory, all singleton types are isomorphic and have no useful distinguishing characteristics (indeed, this is true of all types of the same cardinality – and even then, comparing cardinalities is always undecidable and thus irrelevant at runtime). So your Nine type doesn’t really make sense, because you may as well just write Unit. In general, no introspection into the “internal structure” of a type is offered; even though parametricity does not hold in general (unless you postulate anticlassical axioms), all your functions that can run at runtime are required to be parametric.


Being isomorphic is not the same as being identical, or substitutable for one another. Type theory generally distinguishes between isomorphism and definitional equality, and only the latter allows literal substitution. So a Nine type with a single constructor is indeed isomorphic to Unit, but it's not the same type; it carries different syntactic and semantic meaning, and the type system preserves that.

Another false claim is that type theory does not distinguish types of the same cardinality. Type theory is usually intensional, not extensional, so two types with the same number of inhabitants can have wildly different structures, and this structure can be used in pattern matching and type inference. Cardinality is a set-theoretic notion, but most type theories are constructive and syntactic, not purely set-theoretic.

Also, parametricity is a property of polymorphic functions, not of all functions in general. It's true that polymorphic code can't depend on the specific structure of its type argument, but most type theories don't enforce full parametricity at runtime. Many practical type systems (like Haskell with type classes) break it with ad-hoc polymorphism or runtime types.


This comment contains a lot of false information. I’m first going to point out that there is a model of Lean’s type theory called the cardinality model, in which all types of equal cardinality are modelled as the same set. This is why I say the types have no useful distinguishing characteristics: it is consistent to add the axiom `Nine = Unit` to the type theory. For the same reason, it is consistent to add `ℕ = ℤ` as an axiom.
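
For concreteness, here is roughly what I mean, in Lean 4 syntax (`Nine` is a stand-in single-constructor type, and the axiom name is mine):

    -- A stand-in for the Nine type under discussion: one constructor, no fields.
    inductive Nine : Type where
      | mk : Nine

    -- Postulating this is consistent, precisely because the cardinality
    -- model interprets both sides as the same one-element set.
    axiom nine_eq_unit : Nine = Unit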

> So a Nine type with a single constructor is indeed isomorphic to Unit but it's not the same type, it carries different syntactic and semantic meaning and the type system preserves that.

It carries different syntax but the semantics are the exact same.

> Type theory is usually intensional not extensional so two types with the same number of inhabitants can have wildly different structures

It is true that type theory is usually intensional. It is also true that two types equal in cardinality can be constructed in multiple different ways, but this has nothing to do with intensionality versus extensionality – I wouldn’t even know how to explain why, because it is just a category error – and furthermore, just because they are constructed differently does not mean the types are actually different (because of the cardinality model).

> Cardinality is a set-theoretic notion but most type theories are constructive and syntactic, not purely set-theoretic.

I don’t know what you mean by this. Set theory can be constructive just as well as type theory can, and cardinality is a fully constructive notion. Set theory doesn’t have syntax per se but that’s just because the syntax is part of logic, which set theory is built on.

> most type theories don't enforce full parametricity at runtime

What is “most”? As far as I know Lean does, Coq does, and Agda does. So what else is there? Haskell isn’t a dependent type theory, so it’s irrelevant here.

---

Genuine question: Where are you sourcing your information about type theory? Is it coming from an LLM or similar? Because I have not seen this much confusion and word salad condensed into a single comment before.


I will let Maxatar respond if he wants to, but I will note that his response makes much more sense to me than yours, as someone who uses traditional programming languages and used to do a little math a couple of decades ago.

With yours, it seems that we could even equate string to int.

How can a subtype of the integer type, defined extensionally, be equal to Unit? That really doesn't make any sense to me.

> it is consistent to add `ℕ = ℤ` as an axiom

Do you have a link to this? I am unaware of this. Not saying you're wrong but I would like to explore this. Seems very strange to me.

As he explained, an isomorphism does not mean equality, as far as I know; cf. Cantor. How would anything typecheck otherwise? In denotational semantics, this is not how it works. You could look into semantic subtyping with set-theoretic types, for instance.

type Nine int = {9} defines a subtype of int called Nine that refers to the value 9. All variables declared of that type are initialized as containing int(9). They are equal to Nine. If you erased everything and replaced it with Unit {}, it would not work. This is a whole other type/value.

How would one be able to implement runtime reflection then?

I do understand that his response to you was a bit abrupt. Not sure he was factually wrong about the technical side though. Your knee-jerk response makes sense even if it is too bad it risks making the conversation less interesting.

Usually types are defined intensionally, e.g. by name, not by listing a set of members (an extensional encoding) in their set-theoretic semantics. So maybe you have not encountered such treatment in the languages you are used to?

edit: the only way I can understand your answer is if you are considering the Universe of types as standalone from the Universe of values. In that universe, we only deal with types, and types structured as composites, which is perhaps what you are familiar with? Maybe then it is just that this Universe as formalized is insufficiently constrained/underspecified/over-abstracted?

It shouldn't be too difficult to define specific extensional types on top, such that singleton types would not have their extensional definitions erased.


> Many practical type systems (like Haskell with type classes) break it with ad-hoc polymorphism or runtime types.

Haskell does not break parametricity. Any presence of ad-hoc polymorphism (via type classes) or runtime types (via something like Typeable, itself a type class) is reflected in the type signature and thus completely preserves parametricity.
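
A small sketch of what I mean (`describe` is just an illustrative name):

    import Data.Typeable (Typeable, typeOf)

    -- Fully parametric: the free theorem for this signature forces it
    -- to be the identity (or bottom); no runtime type info is available.
    parametric :: a -> a
    parametric x = x

    -- Runtime type inspection, but only because the Typeable constraint
    -- is visible in the signature; the free theorem you get is the one
    -- for `Typeable a => a -> String`, not the one for `a -> String`.
    describe :: Typeable a => a -> String
    describe x = "a value of type " ++ show (typeOf x)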


I think what they meant is that it is not purely parametric polymorphism. Not that parametricity is broken.


Hmm, maybe

> most type theories don't enforce full parametricity at runtime

means "sometimes types can appear at run time"? If so it's true, but it's not what I understand by "parametricity".


Not sure either: parametric polymorphism is compile-time. Adding runtime behaviors is a sort of escape hatch.

Which is fair, and indeed necessary. It's not so much a breakage as it is a necessity, since non-parametric code may rely on this too.

Or maybe it is about constrained parametricity. In any case, this doesn't seem like a big issue.


How does it become Unit if it is an integer of value 9? Why would cardinalities be undecidable if they are encoded discretely in the type?

For instance, type Nine int = {9} would not be represented as 9. It would probably be a fat pointer. It is not just an int, and it would not even have the same operations (9 + 9 is 18, which is an int but is not of type Nine; but that's fine, a fat pointer does not need to share the same set of operations as the value it wraps).

It could be seen as a refinement of int? I am not sure that it can truly be isomorphic? My suspicion was that it would only be somewhat isomorphic at compile time, for type checking, and if there is a mechanism for auto unwrapping of the value?


There's only one possible value of type Nine; there's only one possible value of type Unit. They're isomorphic: there's a pair of functions to convert from Nine to Unit and from Unit to Nine whose compositions are identities. Both functions are just constants that discard their argument un-inspected. "nineToUnit _ = unit" and "unitToNine _ = {9}".

You've made up your language and syntax for "type Nine int = {9}" so the rules of how it works are up to you. You're sort of talking about it like it's a refinement type, which is a type plus a predicate over values of the type. Refinement types aren't quite dependent types: they're sort of like a dependent pair where the second term is of kind Proposition, not Type; your type in Liquid Haskell would look something like 'type Nine = Int<\n -> n == 9>'.

Your type Nine carries no information, so the most reasonable runtime representation is no representation at all. Any use of the Nine boils down to pattern matching on it, and a pattern match on a Nine only has one possible branch, so you can ignore the Nine term altogether.
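
To make that concrete, here is the isomorphism as actual Haskell (`MkNine` is a made-up constructor standing in for your {9}):

    data Nine = MkNine deriving (Eq, Show)  -- one constructor, no fields

    nineToUnit :: Nine -> ()
    nineToUnit _ = ()

    unitToNine :: () -> Nine
    unitToNine _ = MkNine

    -- Both compositions are identities on one-element types, which is
    -- all an isomorphism requires; neither function inspects its input:
    --   unitToNine (nineToUnit MkNine) == MkNine
    --   nineToUnit (unitToNine ())     == ()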


Why doesn't it seem exactly true?

It is an integer. It is in the definition. And any value should be equal to nine. By construction Nine could have been given the same representation as int, at first. Except this is not enough to express the refinement/proposition.

One could represent it as a fat pointer with a space allocated to the set of propositions/predicates to check the value of.

That allows to check for equality for instance.

That information would not be discarded.

This is basically a subtype of int.

As such, it is both a dependent type and a refinement type. It is true, though, that not all refinement types are dependent types, because of cardinality.

I think Maxatar's response in the same thread puts it in words that are possibly closer to the art.


When does the predicate get tested?

Also, it's not a dependent type. The type does not depend on the value. The value depends on the type.


The predicate gets tested every time we do type checking? It is part of the type identity. And it is a dependent type. Just like an array type is a dependent type, the actual array type depending on the length value argument.

edit: I think I am starting to understand. In the implementations that are currently existing, Singleton types may be abstracted. My point is exactly to unabstract them so that the value is part of their identity.

And only then we can deal with only types i.e. everything from the same Universe.


> The predicate gets tested every time we do type checking? It is part of the type identity.

When does type checking happen?

I think it happens at compile time, which means the predicate is not used for anything at all at run time.

> edit: I think I am starting to understand. In the implementations that are currently existing, Singleton types may be abstracted. My point is exactly to unabstract them so that the value is part of their identity.

I am not sure what you mean by "to unabstract them so that the value is part of their identity", sorry. Could you please explain it for me?

> And only then we can deal with only types i.e. everything from the same Universe.

If you mean avoiding the hierarchy of universes that many dependently typed languages have, the reason they have those is that treating all types as being in the same universe leads to a paradox ("the type of all types" contains itself and that gets weird - not impossible, just weird).


> the predicate is not used for anything at all at run time.

It is used for its value when declaring a new variable of a given type at runtime too. It has to be treated as a special predicate. The allocator needs to be aware of this. Runtime introspection needs to be aware of this. Parametric type instantiation also needs to know of this, since it is used to create dependent types.

The point is that in the universe of types that seems to be built in dependent types, singleton types are just types decorrelated from their set of values. So they become indistinguishable from each other, besides their name. Or so the explanation seems to be. It is abstracted from their value. The proposal was to keep the set definition attached. That is what I call unabstracting them.

The point is exactly to avoid mixing up universes, which can lead to paradoxes. Instead of dealing with types as some sort of values, with functions over types mixed up with functions over values (which are also types), and then functions of types and values to make some sort of dependent types, we keep the algebra of types about types. We just bridge values into singleton types. Also, it should allow an implementation that relies mostly on normal constrained parametricity and those singleton types. The point is that mixing values and types (as values) would exactly lead to this type-of-all-types issue.

But again, I am not familiar enough with the dependent type implementations to know exactly what treatment they have of the issue.


Well, Ladybird appears to be getting a headstart on having detractors.[0]

[0]: https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and...


That person, while clearly smart, is a bit of a professional detractor. So don’t mind him.


What about DEI makes it an “ideological” movement as opposed to other movements who are presumably not ideological?

And I’m not sure what “most people”, is supposed to mean; you do realize you’re talking about 49% – that is, under half, so definitively not “most” – of the US of A’s population?[0]

[0]: https://www.nbcnews.com/politics/politics-news/poll-american...


Maybe the demonizing of everybody who does not agree with it, or the agenda-driven financial investment by mega-corporations with ESG scores invented by the WEF folks.


Ferrocene is a specification, but it’s not a formal specification. Minirust (https://github.com/minirust/minirust) is the closest thing we have to a formal spec, but it’s very much a work-in-progress.


It's a good enough spec to let rustc work with safety critical software, so while something like minirust is great, it's not necessary, just something that's nice to have.


Isn’t the Ada spec also not a formal spec?


The most popular language with a formal specification is Standard ML.

I guess this is terminology confusion on the part of Surac, who probably just wants a specification that is independent of the rustc implementation.


“Natively” is important here because it’s actually relatively easy to get it working with a package: https://github.com/ntjess/wrap-it


or meander, a recently released, more powerful package https://typst.app/universe/package/meander


I encountered some weirdness with wrap-it when my content was a bullet list.


English alternatives like “The staff enjoyed it later” or “The staff had the pleasure of eating it later” would, I expect, come across as more euphemistic than normal to the average English-speaking viewer. So the question is whether the original was intentionally trying to come across as euphemistic, or whether it was using formal/polite language solely because it was on TV.


> Addressing your issue directly, the Axiom of Choice is actively debated:

The axiom of choice is not required to prove Cantor’s theorem, that any set has strictly smaller cardinality than its powerset.

Actually, I can recount the proof here: Suppose there is an injection f: Powerset(A) ↪ A from the powerset of a set A to the set A. Now consider the set S = {x ∈ A | ∃ s ⊆ A, f(s) = x and x ∉ s}, i.e. the set of elements of A that are mapped to by f from a subset that does not contain them. We know that f(S) ∉ S: suppose f(S) ∈ S; then there would exist an s ⊆ A such that f(s) = f(S) and f(S) ∉ s; by injectivity, of course, s = S, and therefore f(S) ∉ S, which contradicts our premise. However, we can now easily prove that there exists an s ⊆ A satisfying f(s) = f(S) and f(S) ∉ s (namely, s = S), thereby showing that f(S) ∈ S, a contradiction.
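
If it helps, here is a rough rendering of the same argument in Lean 4 (assuming Mathlib for `Set`; the theorem name is mine, though Mathlib has essentially this statement as `Function.cantor_injective`). Note that no choice principle appears anywhere:

    import Mathlib.Data.Set.Basic

    theorem no_powerset_injection {A : Type} (f : Set A → A)
        (hf : Function.Injective f) : False :=
      -- S: elements of A hit by f via a subset that does not contain them.
      let S : Set A := {x | ∃ s : Set A, f s = x ∧ x ∉ s}
      -- f S ∉ S: any witness set s must equal S by injectivity of f,
      -- contradicting the assumed membership itself.
      have hnot : f S ∉ S := fun hmem =>
        match hmem with
        | ⟨s, hfs, hxs⟩ => (hf hfs ▸ hxs) hmem
      -- But S itself now witnesses f S ∈ S, so we are done.
      hnot ⟨S, rfl, hnot⟩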


Perhaps this is an ignorant question, but wouldn't you need AC to select the s ⊆ A whose existence the contradiction depends on? A constructive proof, at least the ones I'm trying to build in my head, stumbles when needing to produce that s to use in the following arguments.


No, because you only have to choose _one_ s for the proof to work, and a finite number of choices is valid in intuitionistic and constructive mathematics.

