
while at the same time talking nonstop about how "AI alignment" and "AI safety" are extremely important


Anthropic is the worst about this. Every product release they put out reads like: "Here are 10 issues we found with this model; we tried to mitigate them but only got 80% of the way there. We think it's important to release anyway, and this is definitely not profit-motivated." I think it's because Anthropic is run by effective-altruism AI doomers and operates as an insular cult.



