
Congrats on the launch! I'm excited about AI use cases in the game industry and I believe that things like text-to-asset or text-to-NPC will end up being part of every major game engine.

You mentioned you were working on a consumer gen AI product back in your YC batch 4.5 years ago, which was "pre AI-hype". Do you mind telling us a bit more about your journey and pivot?



Thank you! We're so excited about how LLMs are dramatically expanding the number of people who can write software. It's the perfect opportunity to apply them to gaming, since I think there are far more people who could make games if they weren't limited by technical skills!

Regarding my path with Rosebud, it's been such a long journey, and I'd love to share. Going way back to 2017, I had just finished a PhD in Deep Learning at Berkeley and did the less usual thing of joining an early-stage venture firm. Before that, I had always had to compartmentalize my creative work from my technical work. Once the first batch of interesting results in generative modeling came out, primarily in the image space (CycleGAN, StyleGAN, etc.), I realized it was inevitable that most creative work would be changed completely by this research. Who gets to create, how easily they can create, and even what is created would change dramatically. I just had very strong conviction about this. The only thing that was hard to predict was whether it would happen in the next 2 years or the next 10. Regardless of the uncertainty about when, I knew I wanted to get my hands dirty and build, because it was just something I wanted to use. So I jumped into the deep end and founded Rosebud.

Early on, the models were quite new and not of the highest quality, so I quickly learned that consumers were a much better audience than businesses. Consumers are still very picky, but also very experimental, whereas businesses had very specific requirements that weren't easily "replaced" by AI-generated content. Given those learnings, we iterated many times on different consumer mobile apps, which is a crucible for getting an intuitive interface right. Long story short, you cannot go viral organically on mobile if your app is hard to use and doesn't give a sense of magic. My core thesis is that the strength of generative AI lies in its potential to make creation widely accessible, on demand, and delightful. Naturally, some of our earlier apps focused on more meme-like creation experiences (like Tokkingheads, which organically grew to several million users). We learned that we must, in as few clicks as possible, let the creator achieve impressive results they want to share on social. These experiences helped build a lot of our technical ability to train and productionize models (back then there were no AI inference companies, so we had to manage everything ourselves in the cloud or on our own machines).

Every year since founding Rosebud, I've looked at gaming as a target application area. As mentioned, Rosebud was named after the cheat code in The Sims, which effectively let me use the game as a 3D playground for building virtual worlds when I was a kid. That was my North Star for how powerful generative AI should get: ultimately letting users build their own immersive virtual worlds and games. I also knew that game asset generation alone wasn't enough to be an interesting platform shift for games. We were waiting on code gen to get good enough to really have a chance to change how game dev and creation happen.



