
>>> the entire issue description contains so much unneeded (and probably incorrect) information that it'd be better if they just provided their LLM prompt as an issue instead

When it's put this way, it sounds a lot like the problem of people walking into doctors' offices certain they already know their diagnosis after reading Reddit and WebMD.

What this post actually amounts to, indirectly, is a plea to trust human expertise in a particular domain, instead of assuming that a layperson armed with random web pickings has the same chance as an expert at accurately diagnosing the problem. Assuming otherwise wastes the expert's time and only deepens mutual mistrust.

The exceptions, where Reddit solves something a doctor failed to solve, are what infuse the idea of lay online folk wisdom with merit for people desperately looking for answers and cures. Those exceptions make it impossible to impose a blanket rule that we should always trust experts, who are fallible as well.

The problem is societal. It's that if you erode trust in learned expertise long enough, you end up with a chaos of misinformation that makes it impossible to find a real answer.

A friend of mine who recently died of lung cancer became convinced, in his last days, that he'd gotten it because of the covid vaccine (despite being a lifelong smoker whose father had died of the same disease at 41). And in every individual case you say to yourself, well, I don't want to disabuse someone of the fantasy they've landed on.

This is a devastatingly bad way to raise a generation, though. Short-circuiting one's own logic and handing it over to non-deterministic machines, or randos online... how do we expect this to end?



With great profit for the oligarchs.



