It's pretty similar to looking something up with a search engine, mashing together some top results, and hallucinating a bit, isn't it? The psychological effect of the chat-like interface, plus the lower friction of posting in that chat again versus reading 6 tabs and redoing your search, seems to be the big killer feature. The main "new" info is often incorrect info.
If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs, where you can move/search around quickly, that would probably be about as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
It has no human counterpart, in the sense that humans still go to the library (or a search engine) when they don't know something; we don't have the contents of all the books (or articles/websites) stored in our heads.
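Just to make the hypothetical a couple of paragraphs up concrete: the plumbing is only a short script. Here is a rough Python sketch, assuming the requests and beautifulsoup4 packages and DuckDuckGo's HTML endpoint at html.duckduckgo.com with its result__a link class; those are my assumptions and may change or break without notice.

    # Rough sketch only: assumes `requests` and `beautifulsoup4` are installed,
    # and that DuckDuckGo's HTML endpoint still serves results with "result__a"
    # links -- both assumptions may break without notice.
    import sys
    import requests
    from bs4 import BeautifulSoup

    HEADERS = {"User-Agent": "Mozilla/5.0"}  # DDG tends to reject requests with no UA

    def first_page_urls(query):
        # Fetch the first page of DDG results and pull out the result links.
        resp = requests.get("https://html.duckduckgo.com/html/",
                            params={"q": query}, headers=HEADERS, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        urls = []
        for a in soup.select("a.result__a"):
            href = a.get("href", "")
            if href.startswith("//"):  # DDG sometimes emits protocol-relative redirect links
                href = "https:" + href
            if href.startswith("http"):
                urls.append(href)
        return urls

    def page_text(url):
        # Strip a page down to plain text; on failure, note it and move on.
        try:
            resp = requests.get(url, headers=HEADERS, timeout=10)
            return BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)
        except requests.RequestException as exc:
            return "[failed to fetch: %s]" % exc

    if __name__ == "__main__":
        query = " ".join(sys.argv[1:]) or "example query"
        with open("/tmp/ddg_dump.txt", "w") as out:
            for url in first_page_urls(query):
                out.write("==== %s ====\n" % url)
                out.write(page_text(url) + "\n\n")
        # then open /tmp/ddg_dump.txt in vim or emacs and search around

Whether reading that dump actually beats skimming six tabs is a separate question, but the fetching-and-dumping part itself is trivial.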
Dan makes a case for being charitable to the commenter and for how lame it is to neener-neener into the past; he's not saying it has some opposite meaning everyone is missing out on.
Dan clearly references how people misunderstand not only the comment (“he didn't mean the software. He meant their YC application”) but also the whole interaction (“He wasn't being a petty nitpicker—he was earnestly trying to help, and you can see in how sweetly he replied to Drew there that he genuinely wanted them to succeed”).
So yes, it is the opposite of why people link to it (a judgement I'm making; I'm not arguing Dan has that exact sentiment). People link to it to mock an attitude of hubris and a lack of understanding of what makes a good product, an attitude that wasn't actually there.
The comment isn't infamous because it was petty or nitpicking. It's because the comment was so poorly communicated and because the author was so profoundly out-of-touch with the average person that they had lost all perspective.
That's why it caught the zeitgeist at the time and why it's still apropos in this conversation now.
> It's because the comment was so poorly communicated and because the author was so profoundly out-of-touch with the average person that they had lost all perspective.
None of those things are true. Which is the point I’m making. Go read the original conversation. All of it.
It is absurd to claim that someone who quickly understood the explanation, learned from it, and conceded where they were wrong is somehow “profoundly out-of-touch” and has “lost all perspective”. It’s the exact opposite.
I agree with Dan that we’d be lucky if all conversations were like that.
> If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs, where you can move/search around quickly, that would probably be about as good, and without the hallucinations.
Curiously, literally nobody on earth uses this workflow.
People must be in complete denial to pretend that LLM (re)search engines can’t be used to trivially save hours or days of work. The accuracy isn’t perfect, but entirely sufficient for very many use cases, and will arguably continue to improve in the near future.
The reason people don't use LLMs to "trivially save hours or days of work" is that LLMs don't do that. People would use a tool that works. This should be evidence that the tools provide no exceptional benefit; why do you think that is not true?
The only way LLM search engines save time is if you take what they say at face value as truth. Otherwise you still have to fact-check whatever they spew out, which is the actual time-consuming part of doing proper research.
Frankly I've seen enough dangerous hallucinations from LLM search engines to immediately discard anything it says.
How is verification faster and easier? Normally you would check an article's citations to verify its claims, which still takes a lot of work, but an LLM can't cite its sources (it can fabricate a plausible list of fake citations, but this is not the same thing), so verification would have to involve searching from scratch anyway.
As I said, how are you going to check the source when LLMs can't provide sources? The models, as far as I know, don't store links to sources along with each piece of knowledge. At best they can plagiarize a list of references from the same sources as the rest of the text, which will by coincidence be somewhat accurate.
When talking about LLMs as search engine replacements, I think the stark difference in utility people see stems from the use case. Are you perhaps talking about using it for more "deep research"?
Because when I ask chatgpt/perplexity things like "can I microwave a whole chicken" or "is Australia bigger than the moon" it will happily google for the answers and give me links to the sites it pulled from for me to verify for myself.
On the other hand, if you ask it to summarize the state of the art in quantum computing or something, it's much more likely to speak "off the top of its head". Even when it pulls in knowledge from web searches, it will rely much more on its own "internal corpus" to put together an answer, which is likely to contain hallucinations and obviously has no "source" aside from "it just knowing" (which it's discouraged from saying, so it makes up sources if you ask for them).
I haven't had a source invented in quite some time now.
If anything, I have the opposite problem. The sources are the best part. I have such a mountain of papers to read from my LLM deep searches that the challenge is in figuring out how to get through and organize all the information.
For most things, no it isn’t. The reason it can work well at all for software is that it’s often (though not always) easy to validate the results. But for giving you a summary of some topic, no, it’s actually very hard to verify the results without doing all the work over again.