
The problem with AI-generated content is not necessarily that it's bad; rather, it's that it isn't novel information. To learn something, you must not already know it. If it's AI-generated, the AI already knows it.


How much work do individual humans do that could be considered genuinely, truly novel? I'd estimate the answer is "almost none."


That's true to some extent, but training on synthetic content is a big part of the field these days:

https://importai.substack.com/p/import-ai-369-conscious-mach...


We might also say the same thing about spelling and grammar checkers. The difference will be in the quality of oversight of the tool. "AI-generated drivel" has minimal oversight.

Example: I have a huge number of perplexity.ai search/research threads, but the ones I share with my colleagues are the product of selection bias. Some of my threads are quite useless, much like a web search that was a dud. Those do not get shared.

Likewise, if I use an LLM to draft passages, or even just as something like an overgrown thesaurus, I do find I have to make large changes. But some of the material stays intact. Is it AI, or not AI? It's a bit of both. Sometimes my editing is heavy-handed, other times less so, but in all cases, I checked the output.



