
That's ridiculous hyperbole. Translation and search alone are examples of wildly successful applications of generative AI that a lot of people actually want, because it makes the experience qualitatively better. This is pretty evident if you look at normal people shifting from engines like Google to ChatGPT and other AI-powered search. The recent AI Mode in Google Search is a desperate attempt to stay relevant.


No, they’re examples of misuse of LLMs - “sometimes correct” is not a replacement for a search engine or a translator.

Remember, Google search used to actually find you things before they shifted to replacing results with “somewhat random but reads plausible” AI summaries.


Are we even living in the same universe?? In mine, Claude and Gemini Pro outperform classic machine translators by orders of magnitude; that's not an exaggeration. I can finally rely on machine translation being correct when reading articles in languages I don't speak and when talking to people in their native language. They still miss some nuance in informal talk, but I can be reasonably sure they adapt the cultural context pretty well, and I tolerate the rest.

>replacing results with “somewhat random but reads plausible” AI summaries

I'm talking about actual deep (re)search that cites its sources, not simple summaries. For example, I'm considering a KTM 890 Adventure R as my next motorcycle, but the reliability and TCO are worrying. I prompted and launched an agent to recursively scan YouTube travel videos, or rather their transcriptions, to look for actual issues with this bike, without all the KTM marketing bullshit and paid reviews, and to give me timecodes. And it did: it found a ton of extremely non-obvious non-English channels in the process (Russian, Afrikaans, Spanish, etc.), scanned dozens of hours of video, and provided timecodes for me to verify. That saved me an insane amount of time.
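If you're curious, the boring, non-agentic core of that pipeline is pretty small; here's a rough sketch in Python. The youtube-transcript-api package, the video IDs and the keyword list are placeholders I'm assuming for illustration, and the actual agent hands each transcript to an LLM per language rather than grepping English keywords.

    # Rough sketch: pull YouTube transcripts and flag reliability mentions with timecodes.
    # Assumes `pip install youtube-transcript-api` (classic pre-1.0 get_transcript API).
    from youtube_transcript_api import YouTubeTranscriptApi

    VIDEO_IDS = ["VIDEO_ID_1", "VIDEO_ID_2"]  # placeholders, collected by a search step
    KEYWORDS = ["clutch", "stall", "warranty", "oil leak", "breakdown"]  # placeholder terms

    def scan(video_id):
        try:
            # Take whatever transcript exists (auto-generated or not) in these languages.
            segments = YouTubeTranscriptApi.get_transcript(
                video_id, languages=["en", "ru", "af", "es"])
        except Exception:
            return  # no usable transcript for this video
        for seg in segments:
            if any(k in seg["text"].lower() for k in KEYWORDS):
                minutes, seconds = divmod(int(seg["start"]), 60)
                print(f"{video_id} @ {minutes}:{seconds:02d}  {seg['text']}")

    for vid in VIDEO_IDS:
        scan(vid)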

Normal people actually pay money for this. I'm pretty surprised to see it in the wild, but it's true. Reducing this to "techbros and influencers" is pure wishful thinking.


If you tell this to someone who has never used LLMs/AI, they may be curious. I have used them, though. I also understand how the technology works, and that you will have to read those research papers yourself anyway, verify every source, and check every fact (including the ones that got omitted). Maybe it’s better than previous-gen machine translation, but you’d better not rely on context and subtle sentiment being translated as intended all the time.

If it’s important, it’s still better to do it yourself (or pay for the service of another human).


Maybe ask the Japanese translation team lead who recently resigned how good AI translations are.


They are miles better than ordinary machine translators. That's all that matters, because I can't afford a personal human translator to browse the web.

Human translation is obviously better, but not by much, especially on the web. I know because I test LLMs on pretty complex translations all the time, in languages I understand well, and two people occasionally communicate with me in my native language through an LLM. It's accurate enough that we don't run into any trouble, especially if you don't prompt it naively and use a strong multilingual model. It's not the same as slop generation, since the input comes from a human.
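To make "don't prompt it naively" a bit more concrete, here's the kind of prompt I mean, sketched against the OpenAI Python client; the client, model name and context string are placeholder assumptions, not a vendor recommendation. The point is to tell the model the register, the audience, and what to preserve, rather than pasting bare text.

    # Sketch of a context-rich translation prompt (openai>=1.0 client; model is a placeholder).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def translate(text, source="Russian", target="English",
                  context="casual chat between friends"):
        system = (
            f"You translate from {source} to {target}. Context: {context}. "
            "Preserve tone, idioms and cultural references; explain untranslatable "
            "wordplay in [brackets] instead of dropping it. Output only the translation."
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any strong multilingual model works
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": text}],
        )
        return resp.choices[0].message.content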

That guy reacted to them ignoring him and overwriting his hard work with a worse version, which is terrible but not related to the point I'm making.



