That is a lot! But that's also an expensive vape, with way more tech than cheap ones. Here you can get one for ~10 Euro, and they are NOT rechargeable or anything.
To answer the headline: No. Rust is not faster than C. C isn't faster than Rust either.
What is fast is writing code with zero abstractions or zero-cost abstractions, and if you can't do that (because writing assembly sucks), get as close as possible.
Each layer you pile on adds abstraction. I've never had issues optimizing and profiling C code -- the tooling is excellent and the optimizations make sense. Get into Rust profiling and optimization and you're already in the weeds.
Want it fast? Turn off the runtime checks by calling unsafe code. From there, you can hope and pray like with most LLVM-compiled languages.
If you want a stupid fast interpreter in C, you do computed goto, write a comment explaining why it's not, in fact, cursed, and you're done. In C++, Rust, etc. you'll sit there examining the generated code to see whether the compiler's heuristics kicked in or ended up not generating effectively-computed-goto code.
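For anyone who hasn't seen the trick, here's a minimal sketch of the pattern, using GCC/Clang's "labels as values" extension; the bytecode and handlers are made up for illustration:

    /* Minimal computed-goto dispatch (GCC/Clang "labels as values"
       extension; this is the part that needs the "not cursed" comment). */
    #include <stdio.h>

    enum { OP_INC, OP_DEC, OP_PRINT, OP_HALT };

    static int run(const unsigned char *code) {
        /* One label address per opcode, indexed directly by the opcode. */
        static void *dispatch[] = { &&op_inc, &&op_dec, &&op_print, &&op_halt };
        int acc = 0;

        /* Fetch the next opcode and jump straight to its handler:
           no central switch, one indirect jump per instruction. */
        #define NEXT() goto *dispatch[*code++]
        NEXT();

    op_inc:   acc++;               NEXT();
    op_dec:   acc--;               NEXT();
    op_print: printf("%d\n", acc); NEXT();
    op_halt:  return acc;
        #undef NEXT
    }

    int main(void) {
        const unsigned char program[] = { OP_INC, OP_INC, OP_PRINT, OP_HALT };
        run(program); /* prints 2 */
        return 0;
    }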
Not to mention panics, which are needed but also have branching overhead.
The only thing that is faster in Rust by default is probably math: you get so many more errors and warnings that catch the overflows, casts, etc. you didn't mean to write. That makes a small difference.
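For comparison, you can opt into similar checking in C, but it's explicit rather than the default. A quick sketch using the GCC/Clang checked-arithmetic builtins:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int sum;
        /* Signed overflow is undefined behaviour in plain C; the check
           below is opt-in, whereas Rust's debug builds panic by default. */
        if (__builtin_add_overflow(INT_MAX, 1, &sum)) {
            fprintf(stderr, "overflow caught\n");
            return 1;
        }
        printf("%d\n", sum);
        return 0;
    }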
I love Rust. If I want pure speed, I write unsafe Rust, not C. But it's not going to be as fast as trivial C code by default, because the tradeoffs fundamentally differ: Rust is safe by default, and C is efficient by default.
The article makes some of the same points but it doesn't read like the author has spent weeks in a profiler combing over machine code to optimize Rust code. Sadly I have, and I'm not getting that time back.
> it doesn't read like the author has spent weeks in a profiler combing over machine code to optimize Rust code
It is true that this blog post was not intended to be a comprehensive comparison of the ways in which Rust and C differ in performance. It was meant to be a higher level discussion on the nature of the question itself, using a few examples to try and draw out interesting aspects of that comparison.
> If you want a stupid fast interpreter in C, you do computed goto, write a comment explaining why it's not, in fact, cursed, and you're done.
Bit of an aside, but these days it might be worth experimenting with tail call interpreters coupled with `musttail` annotations. CPython saw performance improvements over their computed goto interpreters with this method, for example [0].
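For anyone curious what that looks like, here's a rough sketch of the shape (not CPython's actual code; opcodes and handlers are invented). Each handler ends in a tail call the compiler is forced to emit as a jump:

    /* Tail-call dispatch using Clang's musttail statement attribute. */
    #include <stdio.h>

    typedef int handler_fn(const unsigned char *code, int acc);
    static handler_fn op_inc, op_print, op_halt;

    static handler_fn *dispatch[] = { op_inc, op_print, op_halt };

    /* Every handler ends in a guaranteed tail call to the next handler,
       so the interpreter compiles to jumps, like computed goto, but each
       handler is a small function the optimizer can register-allocate well. */
    #define NEXT(code, acc) \
        __attribute__((musttail)) return dispatch[*(code)]((code) + 1, (acc))

    static int op_inc(const unsigned char *code, int acc)   { NEXT(code, acc + 1); }
    static int op_print(const unsigned char *code, int acc) { printf("%d\n", acc); NEXT(code, acc); }
    static int op_halt(const unsigned char *code, int acc)  { (void)code; return acc; }

    int main(void) {
        const unsigned char program[] = { 0, 0, 1, 2 }; /* inc, inc, print, halt */
        return dispatch[program[0]](program + 1, 0) == 2 ? 0 : 1;
    }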
Definitely the combination of callgrind (valgrind --tool=callgrind) and kcachegrind, or the combination of HotSpot and perf.
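For anyone who hasn't used that combo, the basic invocations look something like this (the program name is a placeholder):

    # Instrument with callgrind, then browse the call graph in KCachegrind:
    valgrind --tool=callgrind ./your-program
    kcachegrind callgrind.out.<pid>   # valgrind prints the actual filename

    # Or sample with perf and open the recording in Hotspot:
    perf record --call-graph dwarf ./your-program
    hotspot perf.data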
I have toyed with Intel's VTune, but I felt it was very hard to get running, so it's discouraging before you even start. That said, if you need a lot of info on cache etc., VTune is fantastic.
systemd units that are small, simple, and call into a single script are usually fantastic. There's no reason for those scripts to be tied to any particular init system; making as much of your code agnostic to the environment it runs in sounds good regardless. I think that's the feeling you're describing.
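To illustrate the shape I mean, a minimal hypothetical oneshot unit that does nothing but call a script (the description and path are made up):

    [Unit]
    Description=Nightly cleanup (example)

    [Service]
    Type=oneshot
    # All the real logic lives in the script; the unit only schedules
    # and supervises it, so the script stays init-system agnostic.
    ExecStart=/usr/local/bin/cleanup.sh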
Lawsuits against medical professionals are difficult, and in many cases impossible, for the average person to win. They are held less accountable compared to other professions.
> They are held less accountable compared to other professions.
I have no idea what other professions you’re talking about. Doctors are the only professionals where it’s common for multi-million-dollar judgements to be awarded against individuals, in many cases judgements larger than their malpractice insurance limits.
Take a doctor working alone overnight in the ER. They are responsible for every single thing that happens. One of the 4 NPs that they are supposed to have time to supervise, while they are stuck sedating a kid for ortho to work on, makes a mistake: the doctor is the one that’s getting sued. A nurse misinterprets an order and gives too much of something: the doctor is getting sued. Doesn’t matter if it’s their fault or not. Literally every single one of the dozens of patients that comes in with a runny nose, a tummy ache, or a headache is their responsibility and could cost them their house. And there are far too many patients for them to actually supervise fully. They have to trust and delegate, but in practice they are still 100% on the hook for mistakes. For accepting this responsibility they might get $10 per NP patient that they supervise.
Healthcare professionals also occasionally face criminal prosecution for mistakes at a level that wouldn’t even end a career in other professions.
> Lawsuits against medical professionals are difficult, and in many cases impossible, for the average person to win
Malpractice attorneys operate on contingency, so they’re more accessible to the average person than most kinds of attorneys. It’s one of the many reasons healthcare is so expensive in the US.
It’s harder for a doctor to get fired for showing up late to work than it is for a cook at McDonald’s, I guess, but compared to other professionals? I’ve seen software engineers regularly skip through companies leaving disasters in their wake for their entire careers. MBAs regularly destroy companies, lawyers and finance bros get away with murder, and police officers literally get away with murder.
The only profession that faces anywhere near the accountability that doctors do that I can think of might be civil engineers.
I feel like using an LLM for this is not a good fit, because it's super difficult to verify whether the knowledge it found is true or made up. LLMs are much more willing to commit to a conclusion where a human wouldn't be sure at all, and that seems really important here.
In this case, you verify whether the knowledge was made up by comparing the virtual waiter behaviour to the actual waiter. Having a strong test suite like that is actually the ideal scenario for agentic development.
(It's still incredibly hard to pull off for real, because of complex stateful protocols and edge cases around timing and transfer sizes. Samba took 12 years to develop, so even with LLM help you'd probably still be looking at several years.)
I guess the LLM doesn't need to verify whether what it found is true or made up; it can just save the request and answer for later, so they can be reviewed by a developer and documented.
It works because, whether you want some information on React, or say Python, or Prolog, whatever information ChatGPT generates is quickly verifiable, as you have to write code to test it.
Even better, many times it shows me new insights into doing things.
I haven't bought a book in a while, but I'm reading a lot, like really a lot.
All the Americans here arguing why this is a good thing, how your system is so flawed, etc.: remember that this will be accessible to people in countries with good, free healthcare.
This is going to be the alternative to going to a doctor who is 10 minutes away by car, who is entirely and completely free, and who knows me, my history, and has a couple of degrees. People are going to choose asking ChatGPT over their local doctor, who is not only cheaper(!!!) but also actually educated.
People saying this is good because the US system specifically is so messed up and useless are missing that the US makes up ~5% of the world's population. You really think a medical tool made for the issues of 5% of the population will be AMAZING and LIFE SAVING, rather than harmful, for the other 95%? Get a grip.
Not to mention shitty doctors, who exist everywhere, likely using this instead of their own brains. Great work, guys.
I suspect the rationale at OpenAI at the moment is "If we don't do it, someone else will!", which I last heard in an interview with someone who produces and sells fentanyl.
>> This is going to be the alternative to going to a doctor who is 10 minutes away by car, who is entirely and completely free, and who knows me, my history, and has a couple of degrees.
Well then I suppose they'd have no need or motivation to use it, right?
Same here. Also no "TRY OUR AI NOW" button, no Copilot popups, no feeding all emails into LLM training, no ads (!!!) in the inbox(!!!). Just great value.