Interview with OpenAI Cofounder Greg Brockman (forbes.com/sites/peterhigh)
26 points by xzzulz on April 18, 2016 | 6 comments


> We want to be an organization that is clear that we are focused on one thing: the benefit for humanity. Let us make sure that the actions we take are the things that we think will maximize that.

I wonder what their process for evaluating potential actions is.


One topic I've never seen covered in any depth is how to ensure that humanity is not a threat to AI, especially given the common view that AI has the potential to be a threat to humanity.

(Just to be clear, my reference to AI is not singular, but plural.)


http://spectrum.ieee.org/automaton/robotics/artificial-intel...

Here is an article about how some scientists taught robots how to protect themselves from sadistic humans (i.e. little kids).

That being said, robots do not really care what happens to them. If you program a robot to run in an infinite loop, sure, it might get physically damaged, but it won't complain about it. The only reasons you would express concern about a robot's welfare are that you either feel empathy toward it or view the robot as your property and don't want your property getting damaged.


No, Forbes, I refuse to turn off my ad blocker.

Anyone have the text of the article?



Yes, very annoying. You can, however, close the tab and click the link again. But then they break the article into 6 separate pages... yay :(



