
I learned Rust with great help from ChatGPT-4.

If I can learn from AI-generated content, then I totally believe that AI can too.



The problem with AI-generated content is not necessarily that it's bad; rather, it's that it contains no novel information. To learn something, you must not already know it. If the content is AI-generated, the AI already knows it.


How much of the work individual humans do could be considered genuinely novel? My estimate is "almost none."


That's true to some extent, but training on synthetic content is big these days:

https://importai.substack.com/p/import-ai-369-conscious-mach...
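
For the curious, here is a toy, self-contained sketch of the generate-filter-finetune loop that "training on synthetic content" usually means. Everything in it (the string-reversal "task", the noisy teacher, the filter) is a made-up stand-in for illustration, not any real model or training API:

    # Toy sketch of a synthetic-data training loop: a teacher model
    # generates candidates, a curation filter keeps only the good ones,
    # and the curated set becomes the student's training data.
    import random

    def teacher(prompt: str) -> str:
        """A noisy 'teacher model': usually right, sometimes wrong."""
        answer = prompt[::-1]  # the toy task: reverse the string
        if random.random() < 0.3:
            answer += "!"  # simulate an occasional error
        return answer

    def quality_filter(prompt: str, answer: str) -> bool:
        """Curation step: keep only verifiably correct outputs."""
        return answer == prompt[::-1]

    prompts = ["hello", "world", "synthetic", "data"]
    candidates = [(p, teacher(p)) for p in prompts]
    curated = [(p, a) for p, a in candidates if quality_filter(p, a)]

    # 'curated' is the synthetic training set; without the filter, a
    # student model would re-ingest the teacher's mistakes.
    print(f"kept {len(curated)} of {len(candidates)} generations")

The filtering step is the whole game: it is the same selection bias described in the comment below about sharing only the good threads.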


We might also say the same thing about spelling and grammar checkers. The difference lies in the quality of oversight of the tool: "AI-generated drivel" has minimal oversight.

Example: I have a huge number of perplexity.ai search/research threads, but the ones I share with my colleagues are a product of selection bias. Some of my threads are quite useless, much like a web search that was a dud. Those do not get shared.

Likewise, if I use an LLM to draft passages, or even as something like an overgrown thesaurus, I do find I have to make large changes. But some of the material stays intact. Is it AI or not AI? It's a bit of both. Sometimes my editing is heavy-handed, other times less so, but in every case I checked the output.


You are assuming that you and AI are the same sort of thing.

I do not think we are at that point yet. In the meantime, the approach of getting to intelligence by feeding in ever more data may be choked out by poisoned data.

I have a suspicion that there's a bit more to it than just more data though.


AI does not 'learn' like a human.


I learned… If I can… then I totally…



