
That's not been my experience, though I guess it depends on what you're using o1 for.

My experience is that o1 is extremely good at producing a series of logical steps for things. Ask it a simple question and it will write you what feels like an entire manual that you never asked for. For the most part I've stopped caring about integrating AI into software, but I could see o1 being good for writing prompts for another LLM. Beyond that, I have a hard time calling it better than GPT-4+.

How have you been using o1?



Lots of coding tasks, plus discussions about physics/QM. I find that it produces better-quality answers than 4o, which often makes subtle but simple mistakes.

Even in writing, where it is supposed to be worse than 4o, I feel it does better and has a more solid understanding of provided documents.


> discussions about physics/QM

Interesting, could you share an example of this where it provides something of value? I've tried asking a few different LLMs to explain renormalization group theory, and it always goes off the rails in five questions or less.


Sorry, I missed your reply. I realized that most of the QM questions I was asking were actually to 4o. It's mostly about things covered in this book: https://en.wikipedia.org/wiki/Quantum_Field_Theory_in_a_Nuts...



