> This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards.

I think this is somewhere between "sad" and "wtf."


