> This exact scenario is what I described to a friend of mine who is an AI researcher. He was convinced that if we trained the AI on enough data, GPT-x would become sentient. My opinion was similar to yours: I felt that the hallucinating the AI does is insufficient for true extrapolative thought.

It turns out it isn’t just AIs that hallucinate; AI researchers do as well.



"researcher".



