OK, but what happens when you reach the point where the LLM makes fewer mistakes than the human? Where it's better at spotting potential memory corruption and security holes?

It doesn’t feel like we’re very far from that point.
