
So what exactly is the "much smaller and cleaner language struggling to get out" of Rust? If I'm understanding the post right, that language still has references, lifetimes, traits, enums, etc., because all of those features cohere; you can't remove just one and expect the rest of the language to still work. Once you grant all those features, your language isn't much smaller or cleaner than Rust; your language pretty much is Rust.

The last section gives two different hints as to what this "smaller and cleaner" language might be, but neither of them fully makes sense to me.

First, withoutboats's "Notes on a smaller Rust". That post and especially its sequel are great and I like them a lot, but the title is fairly misleading as to what they're getting at. The language that boats sketches out in those posts has significantly different design goals from Rust; in particular, it abandons the requirement of low-level programmer control over runtime behavior, and so is unsuitable for many use cases that Rust is used for. The idea, rather, is to explore what lessons Rust can offer for the design of a language with more "mainstream" requirements (i.e., one that can afford things like a tracing garbage collector, and wants to avoid Rust's downsides compared to other popular languages, like slow compile times and heavy syntactic salt). That language is not "struggling to get out" of Rust; Rust doesn't want to be it.

Second, "In a manner of speaking, that smaller Rust is the language I fell in love with when I first learned it in 2018. Rust is a lot bigger today, in many ways, and the smaller Rust is just a nostalgic rose-tinted memory." I've explained above why I don't think boats's proposed "smaller Rust" is anything like the real Rust was at any point in its history (at least after the very early days, once the designers figured out that they were targeting C++'s niche). In most fundamental respects, Rust hasn't changed that much since 2018, and a lot of the changes (like the new editions) are about making it more syntactically flexible and increasing the fraction of sensical programs that compile. That said, there are two big exceptions: async and const, which were much more minimal in 2018 and have since expanded to big complex meta-features with many interlocking parts that weren't part of the language's original core. If the claim is specifically that Rust was smaller and cleaner before async and const, then by all means, say that! But the post doesn't, leaving us to try to figure out what was meant.



> So what exactly is the "much smaller and cleaner language struggling to get out" of Rust? If I'm understanding the post right, that language still has references, lifetimes, traits, enums, etc., because all of those features cohere; you can't remove just one and expect the rest of the language to still work. Once you grant all those features, your language isn't much smaller or cleaner than Rust; your language pretty much is Rust.

I think there's an argument to be made that you could in fact make a simpler language than Rust while keeping the core concept. This variant of the language would remove:

- the Copy trait

- reborrowing

- deref coercion

- automagic insertion of `into_iter` in loops

- automatic call to drop at the end of scope (instead you'd have to call it yourself or get a compiler error)

- generic type parameters having an implicit `: Sized` bound by default

- lifetime elision

- “match ergonomics”

- and likely a few other pieces of “magic” that can be hard for beginners to grasp; that's just what I had off the top of my head.

This language would be much simpler, having fewer concepts and a lot less “magic”, but it would also be much more cumbersome to use on a day-to-day basis, as all of the above are clever workarounds designed to make common tasks as straightforward as possible, and I don't think anyone would prefer it over Rust itself. It might be useful as an instructional tool when introducing Rust to students, though.
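To give a flavour of what I mean, here's a rough sketch (my own toy example, with invented function names) of what today's elision and automatic cleanup save you from writing:

```rust
// Today's Rust: the lifetime in `first_word`'s signature can be elided,
// the `usize` index is copied implicitly because `usize: Copy`, and `buf`
// is dropped automatically at the end of the scope.
fn first_word(s: &str) -> &str {
    let end = s.find(' ').unwrap_or(s.len());
    &s[..end]
}

fn demo() {
    let buf = String::from("hello world");
    println!("{}", first_word(&buf)); // deref coercion: &String -> &str
}

// Roughly what the stripped-down variant would demand: every lifetime
// spelled out, no coercions, and the cleanup written by hand.
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    let end: usize = s.find(' ').unwrap_or(s.len());
    &s[..end]
}

fn demo_explicit() {
    let buf: String = String::from("hello world");
    println!("{}", first_word_explicit(buf.as_str()));
    drop(buf); // forgetting this would be a compile error in that language
}
```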


I'm actually unconvinced you could do without `Copy`. It's both in the core of the language and essentially required for stuff like shared references to work correctly. Copying could be replaced by a keyword instead of happening implicitly, but that's different from removing the concept entirely from Rust.

The rest, sure, you could do without (reborrowing can't be emulated but I don't think it's strictly necessary to write real code). I'd add catchable exceptions and the use of traits instead of explicit modules as things that I think tremendously complicate the language semantics and are almost certainly not strictly necessary to achieve Rust's language goals.


You can already write a Rust program that never relies on Copy, by explicitly calling .clone() whenever you need to copy something. It's just that this would be insane.


`.clone()` is usually implemented in terms of `Copy`, but the real problem I don't know how to solve without `Copy` is the use of references. Every time you call a function on a shared reference (including the one used by `clone`) you are implicitly copying the reference; it only works because of `Copy`. Without it, I think you would need unsafe code or something similar in order to call a shared method and retain access to the original reference, at which point it would pretty much no longer be Rust, since the vast majority of methods wouldn't be correctly expressible in the safe subset. There might be reborrowing-style tricks you could use to get around this, but as you said, the "core" of Rust shouldn't have reborrowing. Or maybe you could implement clone on references in unsafe code and then just explicitly take a reference to the reference every time you needed to duplicate it... There is also the linear-type trick where you copy a value by pattern matching it destructively, explicitly enumerating all possibilities, and then generating a fresh product with each possibility listed twice, but that cannot implement copy for references.
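To illustrate what I mean about references (a toy example of mine):

```rust
// Shared references are `Copy`: calling a `&self` method passes a copy
// of the reference, which is why `r` is still usable afterwards.
fn main() {
    let s = String::from("hi");
    let r: &String = &s;
    let n = r.len(); // implicitly copies `r` to supply the `&self` argument
    println!("{n} {r}"); // still fine: only a bitwise copy of `r` was consumed
}
```

Without `Copy` on `&T`, the call to `len` would have to move `r`, and the `println!` on the next line wouldn't borrow-check.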

In any case, I think it's just true that `Copy` is quite fundamental to Rust. IMO even if you could somehow turn all instances of it into syntactic sugar (which I think is not true, as the `Cell` example shows), the surface language you used would be sufficiently unlike Rust that it wouldn't really be Rust anymore.


Well no, some types (notably `Cell<T>`) genuinely require `Copy`, because `clone(&self)` takes a reference and can do arbitrary things with the `Cell` (including overwriting the data its reference points to via `Cell::set()`).
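Concretely (my own illustration of the point): `Cell::get` is only offered when `T: Copy`, precisely because a `Clone`-based version would hand arbitrary user code a reference into the cell while `set` can still overwrite it.

```rust
use std::cell::Cell;

fn main() {
    let c = Cell::new(5_u32); // u32: Copy, so `get` is available
    let v = c.get();          // a plain bitwise copy of the value
    c.set(6);
    println!("{v} {}", c.get());

    // let s = Cell::new(String::from("hi"));
    // s.get(); // error: `String` does not implement `Copy`
    // A hypothetical Clone-based `get` would have to run user-defined
    // `clone(&self)` while the Cell stays mutable through `set`, which
    // is exactly the problem described above.
}
```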


I suppose you'd need a Copy-like auto trait to serve as a bound on impl Clone for Cell<T>. It wouldn't have to be magical the way Copy is, though.


Do those features actually cause difficulty for anyone other than compiler engineers, compared to not having them? I haven't personally seen, e.g., newbies stumbling over them; they're actually designed remarkably well to fade into the background and Just Work (i.e., you don't notice they're there, but you definitely would notice if they went away). Yes, there's something to be said for minimalism in language design, but Rust without those features still isn't very minimalistic, so dropping them would seem to bring about most of the costs of minimalism without the benefits.


> Do those features actually cause difficulty for anyone other than compiler engineers, compared to not having them? I haven't personally seen, e.g., newbies stumbling over them;

Pretty much all of the features in the list are things that either I struggled with when learning Rust (the Copy trait and lifetime elision) or I've seen beginners struggle to understand (both new hires and students at the places where I gave Rust lectures).


note that RAII in Rust is not as simple as calling drop() at the end of each lexical scope, because of drop flags.
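For example (a minimal sketch of my own):

```rust
fn maybe_consume(take_it: bool) {
    let s = String::from("hello");
    if take_it {
        drop(s); // `s` is moved out here
    }
    // Whether `s` still needs dropping at the closing brace is only known
    // at runtime, so the compiler tracks it with a hidden "drop flag"
    // rather than unconditionally running the destructor here.
}
```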


> So what exactly is the "much smaller and cleaner language struggling to get out" of Rust?

Austral? https://austral-lang.org/features


This doesn't strike me as much simpler than Rust; it has most of the features listed above. I know a lot of people don't like RAII (which Rust has and Austral doesn't) because they want every function call to be visible at the call site, but replacing that with linear types, whatever their other virtues, does not make the language easier to learn; implicit destructor calls aren't hard to understand as a concept, once you've got your head around the notion of a value with a specifically bounded lifetime, whereas fighting the linearity checker seems likely to be an even greater speed bump for new users than fighting the borrow checker (which Austral still has).


you were reading very closely, well done. yes, that is my claim, Rust was smaller and cleaner before async and const. I was so indirect about it because many of my best friends work on those features and I wasn’t sure how to word it. fortunately Matklad has worded it very well on the other site: 2015 rust was a more complete language that cohered better, but the vision of Rust is not to cohere perfectly, it’s to be an industrial language that is useful even if it’s not beautiful.

https://lobste.rs/c/b8kevh


Yeah, in that case I think the link to boats's work obscures the point a bit.

I take what might be a slightly different read of matklad's point; I don't think Rust has much compromised its vision in terms of which broad features to support, but it has on a couple occasions chosen to ship something that wasn't perfect because being useful requires taking only a bounded amount of time to iterate on design.

So Rust 1.0 shipped without async, even though it was known to be needed for some of Rust's core use cases, because it was too far from being ready and it wouldn't do to wait forever. Once that decision was made, it had implications for how async could work; in particular, really doing it right requires linear types, but this wasn't appreciated when Rust 1.0 shipped and it's not a backwards-compatible change, so by 2018 it was off the table. The choice was, do async in a way that works with the existing design decisions, at the cost of some elegance, or don't do it at all. The former choice is not just more "industrial", I would argue that it coheres better, because waiting for multiple events at the same time is a core feature that a language for foundational software has to have, and the combinator-based approach that people were using in 2018 cohered poorly with the rest of the language (e.g., requiring unnecessary heap allocations). So this wasn't really a compromise to coherence.

(This also happened on a lesser scale when async/await first shipped—e.g., specific "async" syntax instead of a more general coroutine feature—because of eagerness to ship something that year. boats has claimed that this was a matter of existential survival for the language; I'm not sure I agree. But while async/await is a bit less conceptually pure than fully general coroutines, I don't believe that any of today's common complaints about async are downstream of the decision at that time to try to ship something quickly; there don't seem to have been a lot of obvious mistakes from then.)

(My understanding is that const has a similar story but I'm less familiar with the design space there, because people haven't exhaustively chronicled its history like they've done with async, perhaps because it's not as heatedly controversial.)


> in particular, really doing it right requires linear types, but this wasn't appreciated when Rust 1.0 shipped and it's not a backwards-compatible change, so by 2018 it was off the table.

It was pretty much off the table well before that, because a usable implementation of linear types requires being able to ensure the absence of panics. (A panic must unwind the stack, which amounts to automatically running drop implementations.) The two issues are quite closely linked, and hard to address in isolation.


I think an interesting component is that you might also want “semi-linear types”: types which are purportedly linear but can be dropped as an unwinding backstop.

For instance if you’re dealing with database transactions you probably want to make it explicit whether you commit or rollback, but on panic you can likely allow the transaction to be cleaned up automatically.
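You can approximate that today with something like this (a sketch of mine, not a real crate's API): explicit `commit`/`rollback` are the only intended exits, and `Drop` is kept purely as the unwinding backstop.

```rust
struct Transaction {
    // connection handle, savepoint name, etc. would live here
}

impl Transaction {
    fn commit(self) {
        // ... send COMMIT ...
        std::mem::forget(self); // explicitly consumed: skip the backstop
    }
    fn rollback(self) {
        // ... send ROLLBACK ...
        std::mem::forget(self);
    }
}

impl Drop for Transaction {
    fn drop(&mut self) {
        if std::thread::panicking() {
            // unwinding backstop: roll back automatically
        } else {
            // a real semi-linear type system would make this branch a
            // compile-time error; the best today's Rust can do is shout
            panic!("Transaction dropped without commit() or rollback()");
        }
    }
}
```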


Most Rust ORMs and query builders expose a transaction API that takes a closure and runs it inside the transaction, rolling back on unwind or (in most cases) if it's not explicitly committed. This is the most common idiom in Rust for dealing with situations where you want to pass extra data to or from a cleanup routine. Unfortunately, for the async use case in particular it happens to be unsound: https://tmandry.gitlab.io/blog/posts/2023-03-01-scoped-tasks...
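The shape of that idiom is roughly this (illustrative types and signatures of my own, not any particular crate's):

```rust
// Hypothetical stand-ins for a driver's connection and transaction types.
struct Connection;
struct Transaction;

impl Connection {
    fn begin(&mut self) -> Transaction {
        Transaction // a real driver would also send BEGIN here
    }
}

impl Transaction {
    fn commit(self) { /* send COMMIT */ }
    fn rollback(self) { /* send ROLLBACK */ }
    // A real driver would also implement Drop so that an unwind (or a
    // forgotten commit) rolls the transaction back.
}

// Run `body` inside a transaction: commit on Ok, roll back on Err.
fn with_transaction<T, E>(
    conn: &mut Connection,
    body: impl FnOnce(&mut Transaction) -> Result<T, E>,
) -> Result<T, E> {
    let mut tx = conn.begin();
    match body(&mut tx) {
        Ok(value) => {
            tx.commit();
            Ok(value)
        }
        Err(e) => {
            tx.rollback();
            Err(e)
        }
    }
}
```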


I think a version of Rust in which catching panics is unsafe would be entirely justifiable and probably should have been more strongly considered.


This is one of many things that could have been done to solve the unwinding-through-linear-types problem, if it were still possible to make backwards-incompatible changes to the language.


Yes, but unlike most of the proposed solutions to this problem, this one was (1) seriously considered prior to the release of Rust 1.0, and (2) wouldn't have caused major changes to the way most people write Rust programs in practice. i.e. Rust without panic catching in safe code is still essentially Rust.


I think we are using different meanings of the term "cohere" and I am not sure how to reconcile them. I agree that Rust with async is a more useful language. I don't think being useful implies anything about how coherent a language is (I would point to bash and perl as examples of useful languages with very little coherence). "Coherence" to me means that all the features fit together tightly and are designed with each other in mind, and I don't think that's the case for async and const in Rust—simply because they aren't finished being designed.


Your point on coherence is similar to the perspective of an ex-C++ maintainer. This video came out a decade ago (https://www.youtube.com/watch?v=KAWA1DuvCnQ&t=2530s) and I feel his lesson went unheeded. It's relevant to Fred Brooks's bigger and more dangerous concept of Conceptual Integrity.


Is it ever possible for removing a feature from a language to make it less coherent?


I think so, yes. If you remove any of the things in the “core” I mention in the post, the language hangs together much worse even though it’s smaller; enums without pattern matching is a simple example.
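To make that concrete with a toy example of my own: `match` is what lets you get the payload back out, so enums without it would be values you can build but never usefully take apart.

```rust
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Pattern matching is the only safe way to ask "which variant is this,
// and what's inside it?" -- remove it and the enum is just an opaque
// value you can construct but never inspect.
fn area(s: Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}
```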

I’m not just saying that I want to go back to the “good old days”, I really do think that those parts of Rust were designed as a coherent whole, in the same way that Uiua is designed as a coherent whole.



