Cigarette makers were the dying cry of the old aristocracy. Silicon Valley is the rallying cry of the new aristocracy.
While I don’t quite believe they’ll achieve their feudal dreams in the near-to-medium future, I do expect the US to transition to a much more explicitly oligarchic republic at large, with the pretense of “Government of the people, by the people, for the people” largely pushed to the side.
Only solution seems to be to drop out of society to whatever degree possible.
The government and massive corporations being in bed with each other is nothing new. Different breed, same species. Except tech execs think they're a lot smarter than they are.
It's nothing new, as it is essentially the only logical outcome of capitalism. It's not an aberration; it's an intended feature. Capital is power, and law and government are how that power is expressed and enacted over those without capital.
It's actually the logical outcome of any system with a consolidated monopoly on political power (the government). Blaming capitalism is ridiculous because alternative systems suffer from the exact same issue.
Sort of. Capitalism cannot exist without a monopoly of coercive state power backing it. So it makes sense to criticize it when that's what's actually happening. Other systems can work without coercive state power, are in fact intended to, and result in more freedom for the members of the resulting society, so I agree with your take on government generally.
Free trade and private property rights can exist without a monopoly on political power, but as is often the case in discussions like this, I don't really know what's meant by "capitalism".
That being said I don't think at the present moment it's possible to have a society without some form of government, so then the question becomes "what do we do about it", and I think the answer is to limit the scope of political intervention and power as much as possible.
Cashless payments, always-connected software and devices, and required app use for basic services like power, water, and heat, along with the extreme data collection that exists today, make dropping out of society more difficult than ever.
While his crimes were atrocious, Ted Kaczynski might be right in some ways. The industrial and technological revolutions have improved life dramatically for many humans, and we live in a time of astonishing abundance, but at what cost?
the appetite of the rich always grows. the fast technological growth created more wealth than they could consume. once that runs out they'll take back what accidentally "trickled down"
most innovation since 2012 seems to be not in the technology, but the financial sector. not ways to create value, but to squeeze more from the same thing.
Maps, News, and Stocks are all installed by default and supported by ads. Opting out eliminates personalization. One needs to install an app like LuLu to block background calls even with personalization turned off. This started with the Twitter integration many years ago, and while social is no longer tightly integrated, the philosophy around user fingerprinting “while not being tied to your identity” is still very much alive and well in a default macOS install.
That’s all ad-supported shovelware. I use a Mac all day long for work. I never use Maps, News, or Stocks. I also don’t use Weather, Music, Mail, Pages, Numbers, FaceTime, Keynote, Contacts, Reminders, Photo Booth, Books, Dictionary, Stickies, Voice Memos, Apple TV, GarageBand, or Image Playground.
I do use Preview quite a bit. I sometimes use TextEdit, Terminal, or Safari, but I more often use Vim, iTerm2, Firefox, DuckDuckGo Browser, or sometimes Chrome.
It helps not to judge a whole OS by three free apps included with it. Microsoft, meanwhile, puts ads in the main menu and in the task bar. I wouldn’t be surprised if the Windows desktop wallpaper on the Home editions becomes an ad.
The PSP SotN was different enough from the NA PSX release, including a new Maria mode (different from the Saturn one) and a new translation (technically better, but it retranslates some iconic lines).
You get memory safety. That's about it for security. Quality is in the eye of the beholder. Maybe it's quality code? Maybe it's junk; who knows. Rust isn't magic that forces code to be quality code or anything. That said, the code in the Redox system is generally good, so it's probably fine, but that's not because it's written in Rust; it's because of the developers.
Any GC'd language (Python, JS, Ruby, etc.) gives you the same memory safety guarantees that Rust gives you. Of course, GC'd languages tend to go a fair bit slower (sometimes drastically slower) than Rust. So really the memory safety bit of Rust is that the memory safety happens at development and compile time, instead of at runtime like in a GC'd language. So you get the speed of other "systems" languages, like C and C++, AND memory safety.
While I generally agree, the latest Android report suggests that Rust developers get fewer reverts and code reviews are faster. This could mean that better developers tend to adopt Rust, or it could mean that Rust really is a language where quality is easier to attain.
There’s some reason to believe that, given how easy it is to integrate testing into a Rust code base. In general, the trait system is also a bit higher quality: it encourages better isolation, along with things like const-by-default and not allowing APIs that could misuse a data structure in some unsafe way. And it has a rich ecosystem that makes it easy to integrate third-party dependencies, so you’re not solving the same problem repeatedly like you tend to do in C++.
So there is some reason to believe Rust does actually encourage slightly higher quality code out of developers.
Or there are "reverts and code reviews are faster" because no one wants to actually read through the Perl-level line-noise type annotations, and just LGTMs.
> Or there are "reverts and code reviews are faster"
This seems like a slight misreading of the comment you're responding to. The claim is not that reverts are "faster", whatever that would mean; the claim is that the revert rate is lower.
Also, how would skimping on reviews lead to a lower revert rate? If anything, I'd imagine the normal assumption would be precisely the opposite - that skimping on reviews should lead to a higher revert rate, which is contrary to what the Android team found.
What type annotations? In Rust almost all the types are inferred outside of function and struct declarations. In terms of type verbosity (in the code) it is roughly on the same level as TypeScript and Python.
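To illustrate the point above, here's a minimal sketch (the function name and input are just illustrative): only the function signature carries annotations, while every local binding, the closure argument, and the collected container type are inferred by the compiler.

```rust
// Annotations are required on the signature; everything inside is inferred.
fn word_lengths(text: &str) -> Vec<usize> {
    // `words` (an iterator), `w` (&str), and the collected Vec<usize>
    // all get their types from inference, not explicit annotation.
    let words = text.split_whitespace();
    words.map(|w| w.len()).collect()
}

fn main() {
    let lens = word_lengths("hello rusty world");
    assert_eq!(lens, vec![5, 5, 5]);
    println!("{:?}", lens);
}
```

In practice this reads a lot like annotated TypeScript or type-hinted Python: signatures are explicit, bodies are mostly annotation-free.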
> Rust isn't magic that forces code to be quality code or anything.
It is not, but the language and ecosystem are generally very well-designed and encourage you to "do the right thing," as it were. Many of the APIs you'd use in your day-to-day are designed to make it much harder to hold them wrong. On balance, outside of Haskell, it's probably the implementation language that fills me with the most confidence for any given particular piece of software.
Most of the performance penalty for the languages you mentioned is because they're dynamically typed and interpreted. The GC is a much smaller slice of the performance pie.
In native-compiled languages (Nim, D, etc), the penalty for GC can be astoundingly low. With a reference counting GC, you're essentially emulating "perfect" use of C++ unique_ptr. Nim and D are very much performance-competitive with C++ in more data-oriented scenarios that don't have hard real-time constraints, and that's with D having a stop-the-world mark-and-sweep GC.
The issue then becomes compatibility with other binary interfaces, especially C and C++ libraries.
> With a reference counting GC, you're essentially emulating "perfect" use of C++ unique_ptr.
Did you mean shared_ptr? With unique_ptr there's no reference-counting overhead at all. When the reference count is atomic (as it must be in the general case), it can have a significant and measurable impact on performance.
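Rust happens to make exactly this atomic-vs-non-atomic distinction explicit in its standard library, which is a handy way to see the trade-off the comment above describes. A small sketch (helper function names are mine):

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Cloning an Rc bumps a plain (non-atomic) integer counter;
// the type is not allowed to cross threads at all.
fn rc_count_after_clone() -> usize {
    let a = Rc::new(vec![1, 2, 3]);
    let _b = Rc::clone(&a); // cheap non-atomic increment
    Rc::strong_count(&a)
}

// Arc uses an atomic counter, which is what makes it legal
// (and more expensive) to share across threads.
fn sum_across_threads() -> i32 {
    let data = Arc::new(vec![1, 2, 3, 4]);
    let d = Arc::clone(&data); // atomic increment
    let handle = thread::spawn(move || d.iter().sum::<i32>());
    handle.join().unwrap()
}

fn main() {
    assert_eq!(rc_count_after_clone(), 2);
    assert_eq!(sum_across_threads(), 10);
}
```

The general-case C++ `shared_ptr` behaves like `Arc`: it has to assume cross-thread sharing, so it pays for atomic increments even when they aren't needed.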
You might be right. Though with the way I design software, I'm rarely passing managed objects via moves to unrelated scopes. So usually the scope that calls the destructor is my original initializing scope. It's a very functional, pyramidal program style.
Definitely true! Probably add Swift to that list as well. Apple has been pushing to use Swift in WebKit in addition to C++.
Actually, Nim2 and Swift both use automatic reference counting, which is very similar to using C++’s shared_ptr or Rust’s Rc/Arc. If I couldn’t use Nim I’d probably go for Swift. Rust mostly gives me a headache. However, Nim is fully open source and independent.
Though Nim2 does default to an RC + cycle collector memory management mode. You can turn off the cycle collector with mm:arc, or use atomic reference counting with mm:atomicArc. Perfect for most system applications or embedded!
IMHO, most large Rust projects will likely use Rc or Arc types or lots of clone calls. So performance-wise it’s not gonna be too different from Nim, Swift, or even D, really.
> IMHO, most large Rust project will likely use RC or ARC types or use lots of clone calls. So performance wise it’s not gonna be too different than Nim or Swift or even D really.
I do not think so. My personal experience is that you can go far in Rust with plain borrows, without cloning/Rc/Arc and without opting into unsafe. It is good to have borrowing as the default and to use Rc/Arc only when (and especially where) needed.
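As a small sketch of what "going far without cloning" looks like (the function is my own illustrative example): a plain `&[String]` borrow lets a function read owned data without any clone, Rc, or unsafe, and the caller keeps full use of the data afterwards.

```rust
// Borrowing a slice: no ownership transfer, no clone, no refcount.
fn longest<'a>(items: &'a [String]) -> Option<&'a str> {
    items.iter().max_by_key(|s| s.len()).map(|s| s.as_str())
}

fn main() {
    let names = vec!["ion".to_string(), "shell".to_string()];
    assert_eq!(longest(&names), Some("shell"));
    // `names` is still fully usable: it was only borrowed, never moved.
    assert_eq!(names.len(), 2);
}
```

Rc/Arc only become necessary when ownership genuinely has to be shared between independent owners, which is rarer than newcomers expect.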
Being curious, I ran some basic grepping and wc on the Ion Shell project. About 2.19% of its function declarations use Rc or Arc in the definition. That is pretty low.
Naive grepping for `&` (excluding `&&`), assuming most are borrows, gives 1135 lines. `clone` occurs in 62 lines, for a ratio of 5.4%. Including Rc and Arc along with clones, you get about 10.30% vs `&` borrows. That's assuming Rc/Arc line counts are a rough surrogate for actual Rc/Arc usages.
For context, doing a grep for `ref object` vs `object` in my company's Nim project and its deps gives a rate of 2.92% ref objects vs value objects. Nim will use pointers to value objects in many functions. Actually seems much lower than I would've guessed.
Overall, 2.19% of Rust funcs in Ion use Rc/Arc vs 2.92% of my Nim project's types being refs rather than object types. So it's not unreasonable to hold that they have similar usage of reference counting vs value types.
>You get memory safety. That's about it for Security
Not true, you get one of the strongest and most expressive type systems out there.
One example is the mutability guarantees, which are stronger than in any other language. In C++, with const you say "I'm not going to modify this." In Rust, with &mut you're saying "Nobody else is going to modify this." This is 100x more powerful, because you can guarantee nobody messes with the values you borrow. That's one very common issue in efficient C++ code that is unfixable.
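A small sketch of that exclusivity guarantee (my own toy function): while a `&mut` borrow is live, the compiler statically rules out any other reader or writer of the same value.

```rust
// While `v` is mutably borrowed, the compiler guarantees exclusive
// access: no other alias can read or write it until the borrow ends.
fn double_all(v: &mut Vec<i32>) {
    for x in v.iter_mut() {
        *x *= 2; // safe: nothing else can observe `v` mid-update
    }
}

fn main() {
    let mut v = vec![1, 2, 3];
    double_all(&mut v);
    assert_eq!(v, vec![2, 4, 6]);
    // Holding a shared borrow across the &mut call, e.g.
    //   let r = &v; double_all(&mut v); println!("{:?}", r);
    // is rejected at compile time, not discovered at runtime.
}
```

In C++, the equivalent guarantee ("no one else mutates this while I hold it") cannot be expressed in the type system at all; const only constrains the holder.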
Sum types (enums with values) enable designing with types in a way otherwise only doable in ML-family languages. Derive macros make them easy to use as well, since you can skip the boilerplate.
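A minimal sketch of both features together (the `Shape` type is my own example): each variant carries its own data, `match` must handle every variant, and the derives replace hand-written trait boilerplate.

```rust
// Derive macros generate Debug/Clone/PartialEq impls for free.
#[derive(Debug, Clone, PartialEq)]
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// `match` is exhaustive: adding a new variant is a compile error
// here until this function handles it.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect { w: 3.0, h: 4.0 }), 12.0);
    assert!(area(&Shape::Circle { radius: 1.0 }) > 3.14);
}
```

The exhaustiveness check is the practical payoff: refactors that add a variant can't silently leave a code path unhandled.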
Integers of different sizes need explicit casting, which removes another common source of bugs.
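For instance, narrowing a `u32` into a `u8` never happens implicitly; you either opt into wrapping with `as` or use the checked `TryFrom` conversion (the helper function below is my own illustration):

```rust
use std::convert::TryFrom;

// Checked narrowing: out-of-range values are handled explicitly
// instead of being silently truncated as in C/C++.
fn clamp_to_u8(n: u32) -> u8 {
    u8::try_from(n).unwrap_or(u8::MAX)
}

fn main() {
    assert_eq!(clamp_to_u8(200), 200);
    assert_eq!(clamp_to_u8(5000), 255); // would silently wrap in C
}
```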
Macros are evaluated as part of the AST. A lot safer than text substitution.
Further, the borrow checker enables compile-time checking of concurrency.
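A sketch of what that compile-time concurrency checking buys you (the function is my own example): shared mutable state has to be wrapped so the compiler can prove access is synchronized, so forgetting the `Mutex` is a compile error rather than a latent data race.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Arc makes sharing across threads legal; Mutex makes mutation legal.
// Dropping either one turns this into a compile error, not a race.
fn parallel_sum(chunks: Vec<Vec<i32>>) -> i32 {
    let total = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for chunk in chunks {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            let s: i32 = chunk.iter().sum();
            *total.lock().unwrap() += s; // synchronized update
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = *total.lock().unwrap();
    result
}

fn main() {
    assert_eq!(parallel_sum(vec![vec![1, 2], vec![3, 4]]), 10);
}
```

This is the Send/Sync machinery in action: the same ownership rules that prevent dangling pointers also prevent unsynchronized sharing.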
The list goes on, but nobody who has tried Rust properly can say that it only helps prevent memory safety issues. If that's your view, you've just shown that you didn't even try.
It's true for new projects.
For rewrites (such as a shell) it can mean a lot of regressions.
The rust-replacements for coreutils are a good negative example.
The new programs do not reach feature parity, introduced regressions, and in some cases even had security vulnerabilities.
So for battle-proven software I wouldn't say so per se (if your goal is to replace it).
Nonetheless, if you add truly new Rust code to a new or existing codebase, and the interop isn't too much of a hassle, it should hold.
Not necessarily. "Quality" and "Security" can be tricky subjects when it comes to a shell. Rust itself is pretty great, but its HN community is made of cringe and zealotry - don't let them dissuade you from trying the language :P