That's not been my experience, though I guess it depends on what you're using o1 for.
My experience is that o1 is extremely good at producing a series of logical steps for things. Ask it a simple question and it will write you what feels like an entire manual that you never asked for. For the most part I've stopped caring about integrating AI into software, but I could see o1 being good for writing prompts for another LLM. Beyond that, I have a hard time calling it better than GPT-4+.
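To sketch what I mean by that last part (purely illustrative; it assumes the official openai Python client, and the model names and task string are just placeholders):

    # Stage 1: have o1 draft a detailed prompt; Stage 2: run it on a cheaper model.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    task = "Summarize this changelog for non-technical readers."  # placeholder task

    # Ask o1 to write the prompt rather than answer the task directly.
    draft = client.chat.completions.create(
        model="o1",
        messages=[{
            "role": "user",
            "content": "Write a precise, step-by-step prompt that instructs "
                       "another LLM to do this task well:\n" + task,
        }],
    )
    generated_prompt = draft.choices[0].message.content

    # Hand the generated prompt to a faster/cheaper model for the actual work.
    result = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": generated_prompt}],
    )
    print(result.choices[0].message.content)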
Lots of coding tasks and discussions about physics/QM. I find it produces better-quality answers than 4o, which often makes subtle but simple mistakes.
Even for writing, where it is supposed to be worse than 4o, I feel it does better and has a more solid understanding of provided documents.
Interesting, could you share an example of this where it provides something of value? I've tried asking a few different LLMs to explain renormalization group theory, and it always goes off the rails in five questions or less.
How have you been using o1?