
Sure, LLMs are a useful tool, and fast, but the point is they don't have human-level intelligence, can't learn, and aren't autonomous outside of an agent that will attempt to complete a narrow task (with no ownership or guarantee of eventual success).

We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.

If you want to ASSIGN a task to something/someone then you need a human or artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure, there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to get more work done in parallel then you need more entities that you can assign tasks to, and for the time being that means humans.



> the point is they don't have human level intelligence

> If you want to ASSIGN a task to something/someone then you need a human or artificial human

Maybe you haven't experienced it, but a lot of junior devs don't really display that much intelligence. Their operating input is a clean task list, which they take and convert into code. It's more like "code entry" ("data entry", but with code).

The person assigning tasks to them is doing the thinking. And they are still responsible for the final output, so if they find a computer better and cheaper at "code entry" than a human, well then that's what they'll assign it to. As you can see in this thread, many are already doing this.



