
Agree with you.

I heard someone say that LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert. A lot of people don't have access to mental health care, and will ask their chatbot to act like a psychologist.


>[...] LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert.

This mostly makes sense.

The problem is that people will take what you've said to mean "if I have no access to a therapist, at least I can access an LLM", with the default assumption that something is better than nothing. But that assumption breaks down quickly when a sycophantic LLM encourages you to commit suicide, reinforces your emerging psychosis, etc. Speaking to nobody is better than speaking to something that is actively harmful.


All very true. This is why I think the concern about harm reduction and alignment is so important, despite people on HN commonly scoffing at LLM "safety".

Is that not the goal of the project we are commenting under: to create an evaluation framework for LLMs so they aren't encouraging suicide or psychosis, or otherwise being actively harmful?

Sure, yeah. I'm responding to the comment that I directly replied to, though.

I've heard people say the same thing ("LLMs don't need to be as good as an expert to be useful, they just need to be better than your best available expert"), and I also know that some people assume LLMs are, by default, better than nothing. Hence my comment.
