At first I thought this was a typo, but actually I fully agree with this. If we use LLMs (in their current state) responsibly, we won’t see much benefit, because the weight of that responsibility is roughly equivalent to the cost of doing the task without any assistance.
That’s what AI fanboys say every single time I make this point. But the “it’s the same for humans” argument only works if you are referring to little children.
Indeed, my airline pilot brother once told me that a carefully supervised 7-year-old could fly an airliner safely, as long as there was no in-flight emergency.
And indeed, hiring children, who are not accountable for their behavior, does create a supervision problem that, for many kinds of work, can easily exceed the value you get out of them.
I can’t trust AI the way I can trust qualified adults.
Well, you employ different adults than I do, then. Every person I know (including me) can be either thorough or fast, as the post says, but there’s no way to get both.
That’s what it boils down to.