I don't understand how "safe" AI can raise that much money. If anything, they will have to spend double the time on red-teaming before releasing anything commercially. "Unsafe" AI seems much more profitable.
Unsafe AI would cause human extinction which is bad for shareholders because shareholders are human persons and/or corporations beneficially owned by humans.
Related to this, DAOs (decentralized autonomous organizations, which do not have human shareholders) are intrinsically dangerous, because they can pursue their fiduciary duty even if doing so involves causing all humans to die. E.g., if the machine faction in The Matrix were to exist within the framework of US law, it would probably be a DAO.
There's no legal structure that owes that level of fiduciary duty to anything. Corporations don't even really have a fiduciary duty to their shareholders, and no CEO thinks they do.
The idea behind "corporations should only focus on returns to shareholders" is that if you let them do anything else, CEOs will just set whatever targets they want, and it makes it harder to judge if they're doing the right thing or if they're even good at it. It's basically reducing corporate power in that sense.
> E.g., if the machine faction in The Matrix were to exist within the framework of US laws, it would probably be a DAO.
That'd have to be a corporation with a human lawyer as the owner or something. No such legal concept as a DAO that I'm aware of.
Safe super-intelligence will likely be as safe as OpenAI is open.
We can’t build critical software without huge security holes and bugs (see CrowdStrike), but we think we will be able to contain something smarter than us? It would only take one vulnerability.
You are not wrong. But the CrowdStrike comparison isn’t quite apt: they should never have had direct kernel access, and MS set themselves up for that one. SSI, or whatever the hype will be in the coming future, would be very difficult to beat unless you shut down the power. It could develop guard rails instantly, so any flaw you may come up with would be instantly patched. Ofc this is just my take.
We don’t know the counterfactual here… maybe if he’d called it “Unsafe Superintelligence Inc” they would have raised 5x! (though I have doubts about that)
"Safe" means "aligned with the people controlling it". A powerful superhuman AI that blindly obeys would be incredibly valuable to any wannabe authoritarian or despot.
I mean, no, that's not what it means. It might be what we get, but not because "safety" is defined insanely, only because safety is extremely difficult and might be impossible.