
AaronSw exfiltrated data without authorization. You can argue the morality of that, but I think you could make the argument for OpenAI as well. I'm not opining on either, just pointing out the marked similarity here.

edit: It appears I'm wrong. Will someone correct me on what he did?



Arguing for the morality of OpenAI is a little bit harder given their history and actions in the last few years.


One argument would be means to an end, with the end being the initial advancement of AI.

Again, I'm not offering an opinion on it.


This is an argument, but isn't this where your scenario diverges completely? OpenAI's "means to an end" goes further than you state: not the initial advancement of AI, but control of and profit from it.


Yes, they intended control and profit, but it's looking like they can't keep it under control, and ultimately its advancements will be available more broadly.

So, the argument goes that despite its intention, OpenAI has been one of the largest drivers of innovation in an emerging technology.


> edit: It appears I'm wrong. Will someone correct me on what he did?

He didn't do it without authorization.

https://en.wikipedia.org/wiki/Aaron_Swartz

> Visitors to MIT's "open campus" were authorized to access JSTOR through its network.


At that same link is an account of the unlawful activity. He was not authorized to access a restricted area, set up a sieve on the network, and collect the contents of JSTOR for outside distribution.


He wasn't authorised to access the wiring closet. There are many troubling things about the case, but it's fairly clear Aaron knew he was doing something he wasn't authorised to do.


> He wasn't authorised to access the wiring closet.

For which MIT could certainly have a) locked the door and b) trespassed him, but that's a very different issue from whether he had authorization to access JSTOR.



