But what if mutation is intended? How to pass a mutable reference into a function, so that it can change the underlying value and the caller can observe these changes? What about concurrent mutable containers?
> How to pass a mutable reference into a function, so that it can change the underlying value and the caller can observe these changes?
Just modify the value inside the function and return it, then assign it back. This is what the |= syntax is designed for. It's a bit more verbose than passing mutable references to functions, but it's functionally equivalent.
Herd has some optimisations so that in many cases this won't even require any copies.
> What about concurrent mutable containers?
I've considered adding these, but right now they don't exist in Herd.
A good decision. I tried to use it once and realized that it can't even handle UTF-8 properly. It's a mystery to me how such a flawed design was standardized at all.
There are also some things in C that do not work, or work differently, in C++, such as implicit (void*) conversions and empty structures (which in C++ are not really empty, since they have nonzero size); and there is also C++ baggage such as name mangling and the C++ standard library, even if those things are not part of your program, which is another reason why you might prefer C.
It is like those folks who would rather write JSDoc comments than use a type checker like TypeScript, because reasons.
Given C++'s adoption in 1990s commercial software and major consumer operating systems (Apple, IBM, Microsoft, Be), I bet that if the FSF, with their coding guidelines, had not advocated for C, its adoption would not have taken off beyond those days.
"Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C."
> C23 + <compiler C extensions> is hardly simpler as people advocate.
Well, certainly simpler than C++, at any rate.
I mean, just knowing the assignment rules in C++ is worthy of an entire book on its own. Understandably, C's single rule of "assignment is a bitwise copy of the source variable into the destination variable" is inflexible, but at least the person reading the code can, just from the current scope, determine whether some assignment is a bug or not!
In many ways, C++ requires global context when reading any local scope: will the correct destructor get called? Can this variable safely be passed as an argument to a function (without a proper copy constructor you get a bitwise copy on the stack, and the destructor then runs twice on what is effectively the same instance: once for the copy and again when the original's scope ends)? Is this being passed by reference (i.e. it might be modified by the function we are calling) or by value (i.e. we don't need to worry about whether `bar` has been changed after a call to `foo(bar)`)?
Many programmers don't like holding lots of global scope in their head when working in some local scope. In C, all those examples above are clear in the local scope.
All programmers who prefer C over C++ have already tried C++ in large and non-trivial projects before walking away. I doubt that the reverse is true.
Where do you think the first generations of C++ programmers came from?
There is this urban myth that C is simple, from folks who never read the ISO C standard, can't read legalese, and never spent much time browsing their compiler's reference manual.
They mostly learnt K&R C and assume the world is simple, until the code gets ported to another platform or compiler.
Yet in such a simple language, I keep waiting to meet the magical developer who has never written a memory corruption bug with pointer arithmetic or the string and memory library functions.
> There is this urban myth that C is simple, from folks who never read the ISO C standard, can't read legalese, and never spent much time browsing their compiler's reference manual.
And yet you know from previous discussions that folks like Uecker and myself have done all those things, and still walked away from C++.
In my case, I stepped back even after having a decade of work experience in it. For anything needing more abstraction than C, C++ is not going to be a good fit anyway (there are better languages).
> Yet in such a simple language, I keep waiting to meet the magical developer who has never written a memory corruption bug with pointer arithmetic or the string and memory library functions.
Who made that claim? This sounds like a strawman - "If you use C you'll never make this class of errors", which no one said in this conversation.
In any case, the point is even more true of C++: I have yet to meet the magical C++ programmer who never hits the few dozen footguns it has that C doesn't.
> Internet is full of people asserting CVEs in C are only caused by not skilled enough devs.
Sure, but those people are not here, and usually aren't on HN anyway.
The internet is also full of people asserting that CVEs in C++ are only caused by not skilled enough devs, but I consider those people irrelevant too.
The reasons for rejecting C++ in this forum have been repeated often enough that you should have seen them by now: C++ has major systemic problems that don't exist in many other languages, including C.
It should be no surprise to you, at this point, that people choose almost anything over C++. The fact that "anything" also includes "C" is mostly incidental.
No one is asserting that they reject C++ because C is better; they typically reject it for concrete reasons, like the ones I pointed out upthread.
> Might be, then again C23 isn't K&R C that many still learn from.
I agree with this, but then again, not many people are learning C now anyway. My point is that it will die away from natural attrition.
K&R C does have a few advantages, because the compilers at the time were not so aggressive in optimisation and would consistently emit code that (for example) performed a NULL dereference (or other UB), ensuring things like consistently crashing instead of silently losing data or doing the wrong thing.
Is it really necessary to have a lander to perform radio-astronomic observations in moon's shadow? Isn't it easier to have an orbiting spacecraft instead and perform observation while it's orbiting behind the moon?
It's not necessary, but the lunar far side is significantly more radio-quiet than a lunar orbit. And secondly, though unfortunately not something we could really exploit this time, the stable temperatures of the lunar night greatly help with calibration for sensitive measurements like the 21 cm Dark Ages signal.
Isn’t the benefit here that you don’t have to deal with things such as significant Doppler shift, or having to maintain a supply of propellant for station-keeping?
My immediate thought was why not put it in the Earth-Moon L2 Lagrange point, like the James Webb Space Telescope, where it would be permanently shaded from RF from both the Earth and the Sun. But...
1. James Webb is in the Earth-Sun L2 point, where it is largely (though not completely) shaded from the Sun. A radio telescope at Earth-Sun L2 wouldn't be shaded from Earth RF. [edit: JWST is in a halo orbit which keeps it out of the shadow]
2. The Earth-Moon L2 point is shaded from the Earth, but not the Sun. So no benefit compared to the far-side lunar surface.
3. According to TFA, being on the lunar surface gets the telescope out of the solar wind, which is noisy at the low radio frequencies being observed.
There is perhaps some extra opportunity in a solid 10-14 day observation window, but I don't see why a satellite version couldn't still work with smaller windows.
Another reason could be testing ahead of building a much larger radio antenna on the moon's surface in the future, which is mentioned further down in the article. The moon itself and its dust have electromagnetic effects that might affect measurements, and learning about them now could help future planning.
You'd build an array (see e.g. VLA mentioned in the article or SKA), and it is much easier to combine the data from an array if everything isn't flying around and so there are varying distances between the antennae.
Not for radio telescopes, but what is the current state of optical interferometry? Would it help if we didn't have to use adaptive optics to compensate for atmospheric turbulence (and have subtly different images at the different telescopes)?
I have used this article as inspiration for my own software renderer. The portal-based clipping algorithm described in it is great, but sometimes too slow for very complex scenes.