Hacker News

>And with such a size, you have more compute than OpenAI and can train a frontier misaligned model or modify an open source one

That doesn't seem like a reasonable conclusion given OpenAI's and other leading labs' expenditures. They need training data too; could one group actually amass all of that? If you hijack 30 million random computers, that's not nearly as useful as 300K B200 GPUs for model training. Am I missing something?
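A rough back-of-envelope comparison makes the point concrete. The per-device throughput figures below are illustrative assumptions, not vendor specs (consumer machines vary wildly, and many have no usable GPU at all):

```python
# Back-of-envelope: aggregate dense compute of a botnet of consumer machines
# vs. a dedicated B200 cluster. Per-device FLOP/s figures are assumptions
# for illustration only.

BOTNET_MACHINES = 30_000_000   # hijacked consumer computers (from the comment)
CONSUMER_FLOPS = 5e12          # assumed ~5 TFLOP/s usable per machine

CLUSTER_GPUS = 300_000         # B200 GPUs (from the comment)
B200_FLOPS = 2e15              # assumed ~2 PFLOP/s dense BF16 per B200

botnet_total = BOTNET_MACHINES * CONSUMER_FLOPS
cluster_total = CLUSTER_GPUS * B200_FLOPS

print(f"botnet:  {botnet_total:.1e} FLOP/s")
print(f"cluster: {cluster_total:.1e} FLOP/s")
print(f"cluster/botnet: {cluster_total / botnet_total:.1f}x")
```

Even under these generous assumptions the cluster wins on raw FLOPs, and the gap in practice is far larger: large-scale training is bottlenecked by interconnect bandwidth and latency, which a botnet of machines scattered across the internet cannot provide.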



It's a lot, but you're right that the more realistic scenario is just running an existing open-source model; sadly, those are trivial to misalign.

So we'd better have SOTA clouds: people don't update their computers for months and, as Bredolab showed, their GPUs are up for grabs.

Ideally an international org would do it, but they are slow and countries don't cooperate enough, so a startup is more realistic.



