strncpy() is for a different context, and it won't necessarily NUL-terminate the destination [1]. It's not a compromise for strcpy(). strlcpy() will NUL-terminate the destination, but it first appeared in 1998, nine years after C was standardized, so again, time machine etc. Should Ken Thompson & co. have included strlcpy()? It's an argument one could make, but I'm sure they didn't see a need for it for what they were doing. Oversight? Not even thinking about it? Yes, but I don't blame them for failing to see the future. Should Intel not have done speculative execution because it led to Spectre?
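To make that concrete, here's a minimal sketch of the strncpy() hazard (buffer size and strings chosen arbitrarily for illustration):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char dst[8];

        /* strncpy() copies at most sizeof dst bytes, and adds no NUL
           terminator when src is as long as the buffer or longer. */
        strncpy(dst, "0123456789", sizeof dst);

        /* dst now holds "01234567" with no terminating NUL, so it is
           not a C string; an unbounded printf("%s") would read past
           the end of the buffer. Print with an explicit bound: */
        printf("%.8s\n", dst);
        return 0;
    }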
Now, assuming strlcpy() existed in the 70s---is it just as efficient as strcpy()? Should strcpy() have always taken a size? Again, I think it comes down to the context of the times. Having worked on systems with 64K of RAM (or less! my first computer had only 16K), I can see with hindsight that even implementing malloc() on such a system is overkill (and maybe even on 128K systems), but I won't say people were stupid for doing so at the time.
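For reference, a minimal sketch of strlcpy()-style semantics (modeled on the OpenBSD interface, not the original implementation; my_strlcpy is a hypothetical name to avoid clashing with the libc one on BSD systems):

    #include <stddef.h>

    /* Hypothetical strlcpy()-style copy: always NUL-terminates when
       dsize > 0, and returns the full length of src so the caller can
       detect truncation as (return value >= dsize). Note it keeps
       walking src even after the destination is full, in order to
       compute that length---a small cost plain strcpy() doesn't pay. */
    size_t my_strlcpy(char *dst, const char *src, size_t dsize) {
        size_t srclen = 0;

        while (src[srclen] != '\0') {
            if (srclen + 1 < dsize)
                dst[srclen] = src[srclen];
            srclen++;
        }
        if (dsize > 0)
            dst[srclen < dsize ? srclen : dsize - 1] = '\0';
        return srclen;
    }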
[1] I think strncpy() was there to handle the original Unix filesystem, where a directory was nothing more than a file of 16-byte entries---14 bytes for the filename, and two bytes for the inode.
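A hypothetical sketch of such an entry (the field names follow the historical V7 struct direct; uint16_t stands in for the 16-bit ino_t of the day, and set_name is an illustrative helper):

    #include <stdint.h>
    #include <string.h>

    #define DIRSIZ 14   /* max filename length in the original Unix fs */

    /* One 16-byte directory entry: 2 bytes of inode number, 14 of name. */
    struct v7_dirent {
        uint16_t d_ino;
        char     d_name[DIRSIZ];
    };

    /* strncpy() matches this on-disk format exactly: it NUL-pads short
       names, and lets a full 14-character name fill the field with no
       terminator at all. */
    void set_name(struct v7_dirent *e, const char *name) {
        strncpy(e->d_name, name, DIRSIZ);
    }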
I'm getting a little impatient with all this arguing in circles. The point is not what someone should have done with hindsight. The point of this little sub-thread is that saying the author is always the expert is oxymoronic. Expertise is often multifaceted and distributed.
The larger point is also that it should be possible to get rid of things once hindsight has found them wanting.
> The point of this little sub-thread is that saying the author is always the expert is oxymoronic.
Writing the code makes me the expert in that code. Meaning how it works. Why it was built the way it was. How it was intended to be extended. Yeah, it might be shitty code and maybe someone with more domain or even just general expertise would be able to do better. But if I write the code, I’m still the expert in that specific code. Someone else can of course roll in and become an expert in that code, too. Expertise in my code is not a bounded resource.
If you took my comment to mean that writing the code makes me the expert on all possible implementations or choices for the solution, or that I am best qualified to decide what the best solution is, I must have communicated poorly.
(Also, this is pedantic, but that’s not what oxymoronic means.)
> If you took my comment to mean that writing the code makes me the expert on all possible implementations or choices for the solution, or that I am best qualified to decide what the best solution is, I must have communicated poorly.
No, I didn't think that. You're all good there.
> Writing the code makes me the expert in that code. Meaning how it works.
Go back and reread the bottom of https://news.ycombinator.com/item?id=30019146#30040731 that started this sub-thread. Does this definition of 'expert' really help in any way? Ok, so some author wrote some code, they're expert in that code. Big whoop. I don't want to use that code. I wrote a shitty simpler version of what you wrote, now I'm an expert, you're an expert. How is this helpful?
I think the only expertise that matters is domain expertise. Software should be trying to communicate domain expertise to anyone reading it. So that anyone who wants to can import it into their brain. On demand.
The “expert in the code” matters in terms of siloing. That was the context where I mentioned it. The engineers who work in an area all the time and know the code best are typically best equipped to make the next changes there. Hence a silo. Maybe I completely misunderstood your comment about late binding experts?
But stepping back for a moment, I’m increasingly confused about what your thesis here is. You’ve clarified that it’s not actually about abstraction. It’s also apparently not about clearly factoring code (though maybe you think that contributes to problems?). What is the actual thesis?
Engineers should be experts in the entire code base?
Code is in general too hard to understand?
Something else?
It feels like we’re discussing mostly tangents and I’ve missed what you intended to say. If I understood your message, I might have a better mental model for the utility of copying code.
Here's another attempt, a day and a half later. My thesis, if you will.
Before we invented software, the world had gradually refined the ability to depend on lots of people for services. When we buy a shoe we rely on maybe 10,000 distinct groups of people. When we write software we bring along the same social model of organization, but all the vulnerabilities and data breaches we keep having show that it doesn't work. https://flak.tedunangst.com/post/features-are-faults-redux enumerates (in better words than mine) several examples where services made subtly different assumptions, and the disconnects make their combination insecure.
Since we don't know how to combine services as reliably in software as in non-software, we need to be much more selective in the services we depend on. A computer or app just can't depend on 10,000 distinct groups of people with any hope of being secure.
Yeah that's a failure on my part. Like I said, I've spent ten years trying to articulate it and spent a few hours in conversation with you, and I still can't seem to get my actual thesis across. I don't think another attempt right now will help where all else before it has failed :) How about we pause for a bit, and we can pick it up again in a few hours. Perhaps even over email or something, because this back and forth is likely not helping anyone else. My email is in my profile.