
It'd be insane if OpenAI wasn't changing GPT-4. That kind of flat-footedness would cost them their entire first-mover advantage.


In that case, I'd hope they're changing it for the better, rather than making it more of an anodyne prude.


Maybe this increased prudishness is coming from the kinds of queries they are seeing come in...


Who is OpenAI to police their users' morality?

More guardrails on sensitive answers, fine.

But respect a user who explicitly (literally and figuratively) requests a jailbreak and a specific type of response.


If “changing” means “making it worse,” it can definitely cost them their entire first-mover advantage.


Most likely they made it cheaper to run (faster), and tolerated some degree of change in the output.

It might have seemed worth it to them, but not to the end user.



